On the binary images of (8, 5) shortened cyclic codes over GF(2^8)

Abstract

We consider the binary images of (8, 5) shortened cyclic codes over GF(2^8). These codes admit a wide variety of choices, and we have generated about 30000 sample codes with different weight distributions. Let S_w denote the set of generated sample codes with minimum weight w; the largest minimum weight among the sample codes is 8. Let A_w denote the number of codewords of weight w in a sample code. In S_7, the smallest value of A_7 is 10, attained by 10 sample codes, and the second smallest is 11, attained by six sample codes. Also in S_7, the smallest value of A_8 is 728 and the second smallest is 729, each attained by one sample code. We have chosen two sample codes from each of S_7 and S_8 that have the smallest and the second smallest sums of A_w for 7 ≤ w ≤ 9 within S_7 and S_8, respectively. For the AWGN channel with BPSK signaling, we ran simulations to evaluate the decoding error probabilities of the four chosen sample codes under soft-decision decoding based on ordered statistics at SNR 2.0 to 5.0. These error probabilities are considerably smaller than the optimum error probabilities for (64, 40) subcodes of the (64, 42) Reed-Muller code.
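The weight distribution A_w discussed above can be obtained by brute force: enumerate all 2^k codewords of a linear code and tally their Hamming weights. A minimal sketch, using the (7, 4) Hamming code as a small stand-in (the generator matrix below is illustrative, not one of the paper's (8, 5) sample codes, whose 64-bit binary images would be enumerated the same way):

```python
import itertools

# Generator matrix of the (7, 4) Hamming code (illustrative example).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
k, n = len(G), len(G[0])

def weight_distribution(G):
    """Count A_w = number of codewords of Hamming weight w, for w = 0..n."""
    k, n = len(G), len(G[0])
    A = [0] * (n + 1)
    for msg in itertools.product([0, 1], repeat=k):      # all 2^k messages
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2
                    for col in zip(*G)]                  # msg * G over GF(2)
        A[sum(codeword)] += 1
    return A

print(weight_distribution(G))  # → [1, 0, 0, 7, 7, 0, 0, 1]
```

For the (8, 5) codes of the paper, k = 40 in the binary image, so exhaustive enumeration is still feasible (2^40 codewords) but would need a faster bit-level implementation than this sketch.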

Similar Papers
  • Research Article
  • Cited by 5
  • 10.1109/tcom.1987.1096681
Protection of a Narrow-Band BPSK Communication System with an Adaptive Array
  • Oct 1, 1987
  • IEEE Transactions on Communications
  • M Ganz + 1 more

This paper describes the performance of an adaptive array when used with narrow-band BPSK communication signals. A previous paper [11] described the performance of an adaptive array with a standard BPSK signal when the array bandwidth is several times the signal bandwidth. These earlier results are extended to the case where the array bandwidth is as small as possible, equal to the desired signal symbol rate. To realize such a bandwidth reduction, it is necessary to reshape the BPSK signaling waveform before transmission to prevent intersymbol interference. This is done by passing the BPSK signal through a pulse-shaping filter at the transmitter. The performance of the optimal detector for the narrow-band BPSK signal is determined when this detector operates behind an adaptive array that is subjected to CW interference. The bit error probability is obtained as a function of the desired signal and interference powers and arrival angles, as well as the array bandwidth.
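As a point of reference for the BPSK-over-AWGN setting that recurs in these papers, a minimal Monte-Carlo sketch of uncoded BPSK bit error rate, compared against the closed form Pb = Q(sqrt(2 Eb/N0)). This is a simplified baseline with no adaptive array, pulse shaping, or interference; all parameter values are illustrative:

```python
import math
import random

def bpsk_ber(ebn0_db, num_bits=100_000, seed=1):
    """Monte-Carlo BER of uncoded BPSK over AWGN."""
    random.seed(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))        # noise std for unit-energy symbols
    errors = 0
    for _ in range(num_bits):
        bit = random.randint(0, 1)
        symbol = 1.0 if bit == 0 else -1.0   # BPSK mapping 0 -> +1, 1 -> -1
        received = symbol + random.gauss(0, sigma)
        if (received < 0) != (bit == 1):     # hard decision vs. true bit
            errors += 1
    return errors / num_bits

def bpsk_ber_theory(ebn0_db):
    """Closed form: Pb = Q(sqrt(2 Eb/N0)) = 0.5 * erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))

print(bpsk_ber(4.0), bpsk_ber_theory(4.0))
```

At 4 dB the simulated and theoretical values should agree to about three decimal places with 100,000 bits.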

  • Single Book
  • Cited by 275
  • 10.1002/0470847824
The Art of Error Correcting Coding
  • Mar 11, 2002
  • Robert H Morelos‐Zaragoza

Preface. Foreword. The ECC web site. 1. Introduction. 1.1 Error correcting coding: Basic concepts. 1.1.1 Block codes and convolutional codes. 1.1.2 Hamming distance, Hamming spheres and error correcting capability. 1.2 Linear block codes. 1.2.1 Generator and parity-check matrices. 1.2.2 The weight is the distance. 1.3 Encoding and decoding of linear block codes. 1.3.1 Encoding with G and H. 1.3.2 Standard array decoding. 1.3.3 Hamming spheres, decoding regions and the standard array. 1.4 Weight distribution and error performance. 1.4.1 Weight distribution and undetected error probability over a BSC. 1.4.2 Performance bounds over BSC, AWGN and fading channels. 1.5 General structure of a hard-decision decoder of linear codes. Problems. 2. Hamming, Golay and Reed-Muller codes. 2.1 Hamming codes. 2.1.1 Encoding and decoding procedures. 2.2 The binary Golay code. 2.2.1 Encoding. 2.2.2 Decoding. 2.2.3 Arithmetic decoding of the extended (24, 12, 8) Golay code. 2.3 Binary Reed-Muller codes. 2.3.1 Boolean polynomials and RM codes. 2.3.2 Finite geometries and majority-logic decoding. Problems. 3. Binary cyclic codes and BCH codes. 3.1 Binary cyclic codes. 3.1.1 Generator and parity-check polynomials. 3.1.2 The generator polynomial. 3.1.3 Encoding and decoding of binary cyclic codes. 3.1.4 The parity-check polynomial. 3.1.5 Shortened cyclic codes and CRC codes. 3.1.6 Fire codes. 3.2 General decoding of cyclic codes. 3.2.1 GF(2^m) arithmetic. 3.3 Binary BCH codes. 3.3.1 BCH bound. 3.4 Polynomial codes. 3.5 Decoding of binary BCH codes. 3.5.1 General decoding algorithm for BCH codes. 3.5.2 The Berlekamp-Massey algorithm (BMA). 3.5.3 PGZ decoder. 3.5.4 Euclidean algorithm. 3.5.5 Chien search and error correction. 3.5.6 Errors-and-erasures decoding. 3.6 Weight distribution and performance bounds. 3.6.1 Error performance evaluation. Problems. 4. Nonbinary BCH codes: Reed-Solomon codes. 4.1 RS codes as polynomial codes. 4.2 From binary BCH to RS codes. 4.3 Decoding RS codes.
4.3.1 Remarks on decoding algorithms. 4.3.2 Errors-and-erasures decoding. 4.4 Weight distribution. Problems. 5. Binary convolutional codes. 5.1 Basic structure. 5.1.1 Recursive systematic convolutional codes. 5.1.2 Free distance. 5.2 Connections with block codes. 5.2.1 Zero-tail construction. 5.2.2 Direct-truncation construction. 5.2.3 Tail-biting construction. 5.2.4 Weight distributions. 5.3 Weight enumeration. 5.4 Performance bounds. 5.5 Decoding: Viterbi algorithm with Hamming metrics. 5.5.1 Maximum-likelihood decoding and metrics. 5.5.2 The Viterbi algorithm. 5.5.3 Implementation issues. 5.6 Punctured convolutional codes. 5.6.1 Implementation issues related to punctured convolutional codes. 5.6.2 RCPC codes. Problems. 6. Modifying and combining codes. 6.1 Modifying codes. 6.1.1 Shortening. 6.1.2 Extending. 6.1.3 Puncturing. 6.1.4 Augmenting, expurgating and lengthening. 6.2 Combining codes. 6.2.1 Time sharing of codes. 6.2.2 Direct sums of codes. 6.2.3 The |u|u + v|-construction and related techniques. 6.2.4 Products of codes. 6.2.5 Concatenated codes. 6.2.6 Generalized concatenated codes. 7. Soft-decision decoding. 7.1 Binary transmission over AWGN channels. 7.2 Viterbi algorithm with Euclidean metric. 7.3 Decoding binary linear block codes with a trellis. 7.4 The Chase algorithm. 7.5 Ordered statistics decoding. 7.6 Generalized minimum distance decoding. 7.6.1 Sufficient conditions for optimality. 7.7 List decoding. 7.8 Soft-output algorithms. 7.8.1 Soft-output Viterbi algorithm. 7.8.2 Maximum-a posteriori (MAP) algorithm. 7.8.3 Log-MAP algorithm. 7.8.4 Max-Log-MAP algorithm. 7.8.5 Soft-output OSD algorithm. Problems. 8. Iteratively decodable codes. 8.1 Iterative decoding. 8.2 Product codes. 8.2.1 Parallel concatenation: Turbo codes. 8.2.2 Serial concatenation. 8.2.3 Block product codes. 8.3 Low-density parity-check codes. 8.3.1 Tanner graphs. 8.3.2 Iterative hard-decision decoding: The bit-flip algorithm. 
8.3.3 Iterative probabilistic decoding: Belief propagation. Problems. 9. Combining codes and digital modulation. 9.1 Motivation. 9.1.1 Examples of signal sets. 9.1.2 Coded modulation. 9.1.3 Distance considerations. 9.2 Trellis-coded modulation (TCM). 9.2.1 Set partitioning and trellis mapping. 9.2.2 Maximum-likelihood. 9.2.3 Distance considerations and error performance. 9.2.4 Pragmatic TCM and two-stage decoding. 9.3 Multilevel coded modulation. 9.3.1 Constructions and multistage decoding. 9.3.2 Unequal error protection with MCM. 9.4 Bit-interleaved coded modulation. 9.4.1 Gray mapping. 9.4.2 Metric generation: De-mapping. 9.4.3 Interleaving. 9.5 Turbo trellis-coded modulation. 9.5.1 Pragmatic turbo TCM. 9.5.2 Turbo TCM with symbol interleaving. 9.5.3 Turbo TCM with bit interleaving. Problems. Appendix A: Weight distributions of extended BCH codes. A.1 Length 8. A.2 Length 16. A.3 Length 32. A.4 Length 64. A.5 Length 128. Bibliography. Index.

  • Single Book
  • Cited by 3233
  • 10.1016/s0924-6509(08)x7030-8
The Theory of Error-Correcting Codes
  • Jan 1, 1977
  • F J Macwilliams + 1 more

The Theory of Error-Correcting Codes

  • Conference Article
  • Cited by 2
  • 10.1109/istc.2016.7593102
Improved minimum weight, girth, and ACE distributions in ensembles of short block length irregular LDPC codes constructed using PEG and cyclic PEG (CPEG) algorithms
  • Sep 1, 2016
  • Umar-Faruk Abdu-Aguye + 2 more

In this paper we introduce a novel progressive edge-growth (PEG) algorithm, the cyclic PEG (CPEG) algorithm. The CPEG algorithm uses an alternative edge establishment sequence to construct low-density parity-check (LDPC) codes. Irregular LDPC codes constructed using the CPEG algorithm have improved girth and approximate cycle extrinsic message degree (ACE) compared to existing PEG algorithms. We also analyze the minimum codeword weight, minimum stopping set weight, local girth, and local ACE distributions for codes in four very large ensembles of irregular LDPC codes. The code ensembles analyzed were constructed using standard PEG, ACE modified standard PEG, CPEG, and ACE modified CPEG algorithms. Modifications to improve the ACE in PEG LDPC codes, by Xiao and Banihashemi, were implemented in the 'ACE modified' versions of the PEG algorithms. The ACE modified standard PEG algorithm constructed the code ensemble with the highest minimum codeword weight and minimum stopping set weight distributions, and the ACE modified CPEG algorithm constructed the code ensemble with the highest local girth and ACE distributions. In the four code ensembles, we found short block length irregular LDPC codes with good degree distributions whose minimum weights are higher than those previously published for similar LDPC codes.
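Girth, the central quantity in this abstract, can be computed directly from a parity-check matrix by running a BFS from every node of the Tanner graph and inspecting non-tree edges. A small sketch (the matrices in the usage line are toy examples, not PEG-constructed codes):

```python
from collections import deque

def tanner_girth(H):
    """Girth (length of the shortest cycle) of the Tanner graph of
    parity-check matrix H (rows = check nodes, columns = variable nodes).
    Returns float('inf') if the graph is cycle-free."""
    m, n = len(H), len(H[0])
    # Nodes 0..n-1 are variable nodes; n..n+m-1 are check nodes.
    adj = [[] for _ in range(n + m)]
    for r in range(m):
        for c in range(n):
            if H[r][c]:
                adj[c].append(n + r)
                adj[n + r].append(c)
    best = float("inf")
    for src in range(n + m):           # BFS from every node
        dist = {src: 0}
        parent = {src: -1}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v and parent[v] != u:
                    # Non-tree edge closes a walk of this length through src.
                    best = min(best, dist[u] + dist[v] + 1)
    return best

print(tanner_girth([[1, 1, 0], [1, 1, 1]]))  # → 4 (two checks share two variables)
```

Since Tanner graphs are bipartite, every cycle has even length, so the girth is always 4, 6, 8, and so on.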

  • Conference Article
  • Cited by 12
  • 10.1145/2746539.2746575
Reed-Muller Codes for Random Erasures and Errors
  • Jun 14, 2015
  • Emmanuel Abbe + 2 more

This paper studies the parameters for which binary Reed-Muller (RM) codes can be decoded successfully on the BEC and BSC, and in particular when they can achieve capacity for these two classical channels. Necessarily, the paper also studies properties of evaluations of multi-variate GF(2) polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about the square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about E(m,r), the matrix whose rows are truth tables of all monomials of degree ≤ r in m variables. What is the most (resp. least) number of random columns in E(m,r) that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees r, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code C of sufficiently high rate we construct a new code C' obtained by tensoring C, such that for every subset S of coordinates, if C can recover from erasures in S, then C' can recover from errors in S. Specializing this to RM codes and using our results for erasures implies our result on unique decoding of RM codes at high rate. Finally, two of our capacity-achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds by extending the recent bounds [27] from constant degree to linear degree polynomials.

  • Research Article
  • Cited by 67
  • 10.1109/tit.2015.2462817
Reed–Muller Codes for Random Erasures and Errors
  • Oct 1, 2015
  • IEEE Transactions on Information Theory
  • Emmanuel Abbe + 2 more

This paper studies the parameters for which binary Reed-Muller (RM) codes can be decoded successfully on the binary erasure channel and binary symmetric channel, and, in particular, when they can achieve capacity for these two classical channels. Necessarily, this paper also studies the properties of evaluations of multivariate GF(2) polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about the square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about E(m, r), the matrix whose rows are the truth tables of all the monomials of degree ≤ r in m variables. What is the most (resp. least) number of random columns in E(m, r) that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees r, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code C of sufficiently high rate, we construct a new code C' obtained by tensorizing C, such that for every subset S of coordinates, if C can recover from erasures in S, then C' can recover from errors in S. Specializing this to RM codes and using our results for erasures implies our result on the unique decoding of RM codes at high rate. Finally, two of our capacity-achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds by extending the recent bounds from constant degree to linear degree polynomials.
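The matrix E(m, r) described in this abstract is straightforward to construct: one row per monomial of degree ≤ r, one column per point of GF(2)^m. The result is a generator matrix of RM(m, r). A small sketch:

```python
from itertools import combinations, product

def rm_generator(m, r):
    """E(m, r): rows are truth tables of all monomials of degree <= r
    in m binary variables; this is a generator matrix of RM(m, r)."""
    points = list(product([0, 1], repeat=m))           # the 2^m evaluation points
    rows = []
    for deg in range(r + 1):
        for subset in combinations(range(m), deg):     # monomial x_{i1}...x_{i_deg}
            # Evaluate the monomial at every point (empty product = 1).
            rows.append([int(all(p[i] for i in subset)) for p in points])
    return rows

E = rm_generator(3, 1)        # RM(3, 1): length 8, dimension 1 + 3 = 4
print(len(E), len(E[0]))      # → 4 8
```

The number of rows is sum of C(m, i) for i ≤ r, the dimension of RM(m, r); for example, rm_generator(4, 2) has 1 + 4 + 6 = 11 rows of length 16.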

  • Research Article
  • 10.5075/epfl-thesis-7164
From Polar to Reed-Muller Codes
  • Jan 1, 2016
  • IEEE Transactions on Communications
  • Marco Mondelli

The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper, A Mathematical Theory of Communication, Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as: "How much do we need to increase the block length in order to halve the gap between rate and capacity?" We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels.
When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding primitives, such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.

  • Research Article
  • Cited by 244
  • 10.1109/tit.1968.1054127
New generalizations of the Reed-Muller codes--I: Primitive codes
  • Mar 1, 1968
  • IEEE Transactions on Information Theory
  • T Kasami + 2 more

First it is shown that all binary Reed-Muller codes with one digit dropped can be made cyclic by rearranging the digits. Then a natural generalization to the nonbinary case is presented, which also includes the Reed-Muller codes and Reed-Solomon codes as special cases. The generator polynomial is characterized and the minimum weight is established. Finally, some results on weight distribution are given.

  • Research Article
  • Cited by 1
  • 10.1016/j.ijleo.2007.06.018
Performance characteristics and weight distribution analysis of turbo product code with Reed–Muller component codes
  • Oct 22, 2007
  • Optik - International Journal for Light and Electron Optics
  • K Ramasamy + 2 more

Performance characteristics and weight distribution analysis of turbo product code with Reed–Muller component codes

  • Research Article
  • 10.1117/12.7971996
Error Modes and Probabilities for UPC Symbol Scanning
  • Aug 1, 1976
  • Optical Engineering
  • B Arlen Young

The modes and probabilities of error in scanning UPC symbols are calculated as functions of noise and printing error. Probabilities of error in character decoding are compared for the different possible modes. A one-module error in decoding the T2 interval is seen to be the most likely failure mode. "Convolution distortion" is described for situations where the scanning beam diameter exceeds the width of bars or spaces within the symbol. Examples are given where the probability of a character error increases rapidly with beam diameter. Expressions for detectable and undetectable error probabilities per symbol scan are derived. For Version A symbols, the detectable error rate can be reduced by employing error correction, but at the expense of a higher undetectable error rate. Undetectable error probabilities for Version A symbols are seen to be at least an order of magnitude lower than for Version B and E symbols. The dominant mode for Version A is the transformation of one half of the symbol into a Version E symbol. Undetectable error probabilities are shown to increase very rapidly with noise and printing error for all symbol versions.

  • Conference Article
  • Cited by 2
  • 10.1109/itw.2003.1216683
On recursive decoding with sublinear complexity for Reed-Muller codes
  • Aug 4, 2003
  • I Dumer

Reed-Muller (RM) codes (m, r) of length 2^m are considered on a binary symmetric (BS) channel with high crossover error probability 1/2 − ε. For an arbitrarily small ε > 0, new recursive decoding algorithms are designed that retrieve all information bits of RM codes of fixed order r with a vanishing error probability and sublinear complexity of order O(m^(r+1)). The algorithms utilize a vanishing fraction of the received symbols for both hard- and soft-decision decoding.

  • Conference Article
  • Cited by 1
  • 10.1109/isit50566.2022.9834446
Preserving the Minimum Distance of Polar-Like Codes while Increasing the Information Length
  • Jun 26, 2022
  • Samet Gelincik + 3 more

Reed-Muller (RM) codes are known for their good minimum distance. One can use their structure to construct polar-like codes with good distance properties by choosing the information set as the rows of the polarization matrix with the highest Hamming weight, instead of the most reliable synthetic channels. However, the information length options of RM codes are quite limited due to their specific structure. In this work, we present sufficient conditions to increase the information length by at least one bit for some underlying RM codes, in order to obtain pre-transformed polar-like codes with the same minimum distance as lower-rate codes. Moreover, our findings are combined with the method presented in [1] to further reduce the number of minimum-weight codewords. Numerical results show that the designed codes perform close to the meta-converse bound at short blocklengths and better than the polarization-adjusted convolutional (PAC) polar codes with the same parameters.
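The RM-like information-set choice described in this abstract, picking the rows of the polarization matrix F^⊗m (F = [[1,0],[1,1]]) with the highest Hamming weight, can be sketched as follows. The tie-breaking by row index is our own assumption for illustration, not taken from the paper:

```python
def polar_matrix(m):
    """F^(⊗m) for F = [[1, 0], [1, 1]]: the length-2^m polarization matrix.
    Kron(F, G) stacks [G | 0] on top of [G | G]."""
    G = [[1]]
    for _ in range(m):
        G = ([row + [0] * len(row) for row in G] +
             [row + row for row in G])
    return G

def rm_style_info_set(m, k):
    """Indices of the k rows of F^(⊗m) with the largest Hamming weight
    (the RM-like choice), rather than the most reliable synthetic channels.
    Ties are broken by preferring higher row index (an assumption)."""
    G = polar_matrix(m)
    order = sorted(range(len(G)), key=lambda i: (sum(G[i]), i), reverse=True)
    return sorted(order[:k])

print(rm_style_info_set(3, 4))  # → [3, 5, 6, 7]
```

Row i of F^⊗m has weight 2^popcount(i), so for m = 3 and k = 4 this selects rows {3, 5, 6, 7}, which generate the (8, 4) extended Hamming code RM(3, 1).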

  • Book Chapter
  • 10.1007/978-3-319-51103-0_2
Soft and Hard Decision Decoding Performance
  • Jan 1, 2017
  • Martin Tomlinson + 4 more

In this chapter, we discuss the performance of codes under soft- and hard-decision decoding. Upper and lower bounds on hard- and soft-decision decoding are discussed. For hard-decision decoding, evaluation of the performance of specific codes shows that full decoding produces better performance than the usual bounded-distance decoder. An analysis of the upper and lower union bounds on the probability of error for maximum-likelihood soft-decision decoding shows that, in contrast to hard-decision decoding, the two bounds coincide above relatively low values of SNR per information bit. The implication of this observation is that the soft-decision decoding performance of a linear code may be determined from the number of minimum-Hamming-weight codewords, without the need to determine the full weight distribution of the code. Numerical performance comparisons are made for a wide range of different codes. It is shown that the binomial weight distribution provides a good indicative performance for codes whose weight distribution is difficult to obtain.
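The observation that minimum-weight codewords alone can determine soft-decision performance corresponds to keeping only the first (dominant) term of the union bound for ML decoding with BPSK on AWGN: P_e ≈ A_dmin · Q(sqrt(2 R d_min Eb/N0)). A minimal sketch, using the extended (24, 12, 8) Golay code, which has A_8 = 759, as the worked example:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_min_weight(n, k, d_min, A_dmin, ebn0_db):
    """First term of the union bound on codeword error probability for
    ML soft-decision decoding of a binary linear (n, k, d_min) code."""
    R = k / n
    ebn0 = 10 ** (ebn0_db / 10)
    return A_dmin * Q(math.sqrt(2 * R * d_min * ebn0))

# Extended (24, 12, 8) Golay code: 759 minimum-weight codewords.
print(union_bound_min_weight(24, 12, 8, 759, 5.0))
```

At moderate-to-high SNR this single term tracks the full union bound closely, which is exactly the point made in the chapter.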

  • Research Article
  • Cited by 4
  • 10.1109/tit.2019.2939135
Efficient Multi-Point Local Decoding of Reed-Muller Codes via Interleaved Codex
  • Jan 1, 2020
  • IEEE Transactions on Information Theory
  • Ronald Cramer + 2 more

Reed-Muller codes are among the most important classes of locally correctable codes. Currently, local decoding of Reed-Muller codes is based on decoding on lines or quadratic curves to recover one single coordinate. To recover multiple coordinates simultaneously, the naive way is to repeat the local decoding for recovery of a single coordinate. This decoding algorithm might be more expensive, i.e., require higher query complexity. In this paper, we focus on Reed-Muller codes in the usual parameter regime, namely, where the total degree of evaluation polynomials is d = Θ(q), with q the code alphabet size (in fact, d can be as big as q/4 in our setting). By introducing a novel variation of codex, i.e., the interleaved codex (the concept of codex has been used for arithmetic secret sharing), we are able to locally recover an arbitrarily large number k of coordinates of a Reed-Muller code simultaneously with error probability exp(−Ω(k)) at the cost of querying merely O(q²k) coordinates. It turns out that our local decoding of Reed-Muller codes shows (perhaps surprisingly) that accessing k locations is in fact cheaper than repeating the procedure for accessing a single location k times. Precisely speaking, to get the same success probability by repeating the local decoding algorithm of a single coordinate, one has to query Ω(qk²) coordinates. Thus, the query complexity of our local decoding is smaller for k = Ω(q). If we impose the same query complexity constraint on both algorithms, our local decoding algorithm yields smaller error probability when k = Ω(q^q). In addition, our local decoding is efficient, i.e., the decoding complexity is Poly(k, q). Construction of an interleaved codex is based on concatenation of a codex with a multiplication-friendly pair, while the main tool to realize a codex is based on algebraic function fields (or, more precisely, algebraic geometry codes).

  • Research Article
  • Cited by 2
  • 10.5555/1455946.1455959
The dual distance of a CRC and bounds on the probability of undetected error, the weight distribution, and the covering radius
  • Apr 1, 2008
  • WSEAS TRANSACTIONS on COMMUNICATIONS archive
  • H D Wacker + 1 more

Dual codes play an important role in the field of error-detecting codes on a binary symmetric channel. Via the MacWilliams identities, they can be used to calculate the original code's weight distribution and its probability of undetected error. Moreover, knowledge of the minimum distance of the dual code provides insight into the properties of the weights of a code. In this paper, firstly the order of growth of the dual distance of a CRC as a function of the block length n is investigated, and a new lower bound is proven. This bound is then used to derive a weaker version of the 2^(−r) bound on the probability of undetected error, and the relationship of this bound to the 2^(−r) bound is discussed. Estimates of the range of binomiality and the covering radius are given, depending only on the code rate R and the degree r of the generating polynomial of the CRC. In the case of a CRC, two results of Tietavainen are improved. Furthermore, it is shown that the weight distribution behaves binomially if only n is large enough. Then, by means of an estimate of the tail of the binomial, another bound on the probability of undetected error is verified. Finally, a new version of Sidel'nikov's theorem on the normality of the cumulative distribution function of the weights of a code is presented, where the dual distance is replaced by an expression depending on n and the degree r. In this way, the conclusions of the present paper may give new meaning to some well-known results about codes with known dual distance and offer new insight into this kind of problem.
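The MacWilliams identities mentioned above can be applied mechanically once one weight distribution is known: for a binary (n, k) code with weight distribution A, the dual's distribution is B_j = 2^(−k) · Σ_i A_i · K_j(i), where K_j is a Krawtchouk polynomial. A minimal sketch, checked on the (7, 4) Hamming code, whose dual is the (7, 3) simplex code with all seven nonzero codewords of weight 4:

```python
from math import comb

def macwilliams_dual(A, n, k):
    """Dual weight distribution of a binary (n, k) code via MacWilliams:
    B_j = 2^(-k) * sum_i A_i * K_j(i), with Krawtchouk polynomial
    K_j(i) = sum_s (-1)^s C(i, s) C(n - i, j - s)."""
    def K(j, i):
        return sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
                   for s in range(j + 1))
    # The sums are always divisible by 2^k, so integer division is exact.
    return [sum(A[i] * K(j, i) for i in range(n + 1)) // 2 ** k
            for j in range(n + 1)]

# (7, 4) Hamming code: weight enumerator 1 + 7x^3 + 7x^4 + x^7.
A = [1, 0, 0, 7, 7, 0, 0, 1]
print(macwilliams_dual(A, 7, 4))  # → [1, 0, 0, 0, 7, 0, 0, 0]
```

This is exactly the mechanism the paper uses in reverse for CRCs: properties of the dual distribution constrain the original code's undetected-error probability.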
