Abstract

The theory of error-correcting codes was born in 1948 when C. Shannon wrote his landmark paper [1] on the mathematical theory of communication. This, of course, does not mean that there was no notion of the coding of messages before. Although this notion did not take the shape of a mathematical science, it kept producing, from time to time, instructive examples that may still be interesting to the mathematical community because they either present a surprising provisional insight or are of exceptional beauty. Below I intend to discuss some of these episodes. My aim here is not to contest the generally acknowledged priorities, nor do I claim that the discovery of these curiosities is my achievement. Rather, I want to bring together a series of mathematical stories that form a part of the early history (or the prehistory) of coding theory.

The purposes of the transformation of messages before transmission may be various: to compress the text in order not to send redundant information, to conceal the sense of the text from an unauthorized user, or to add a few check symbols to correct possible channel errors after the transmission. The theory of error-correcting codes deals with the last problem.

Let $F$ be a finite set (an alphabet) of size $|F| = q$. A ($q$-ary block) code $A$ of length $n$ is a subset of $F^n$. For $q$ a prime power and $F = \mathbb{F}_q$ a finite field, a linear code is a linear subspace of the vector space $F^n$. Codes are designed for the transmission of messages over noisy channels. A channel is defined as a stochastic mapping $T : F \to F$ with the matrix of transition probabilities $(p(v \mid u))$, $u, v \in F$, where $p(v \mid u) = \Pr\{v \text{ is received} \mid u \text{ is transmitted}\}$ (we do not use the most general definition here). Note that we assume that the information transmission channel is memoryless, i.e., the noise affects the letters of a transmitted word statistically independently. Suppose a codeword (a message) $a \in A$ is to be transmitted over $T$ letter by letter. Denote by $x \in F^n$ the received word. To reconstruct the transmitted word from the received one, let us introduce the mapping $D : F^n \to A$ called the decoder. The goal of the decoder is to minimize the probability of decoding error, i.e., of the event $D(x) \neq a$. It can be shown that if the messages are equiprobable, the error probability is minimized (over all possible decoding rules) by the so-called maximum-likelihood decoder $D_{\mathrm{ML}}$ defined by the equality
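
$$
D_{\mathrm{ML}}(x) = \arg\max_{a \in A} \Pr\{x \text{ is received} \mid a \text{ is transmitted}\} = \arg\max_{a \in A} \prod_{i=1}^{n} p(x_i \mid a_i),
$$

where $x_i$ and $a_i$ denote the $i$-th letters of $x$ and $a$ (notation introduced here for convenience); the second equality uses the memorylessness of the channel.

As a concrete illustration of this rule, here is a minimal brute-force sketch of maximum-likelihood decoding. The binary symmetric channel with crossover probability 0.1 and the length-3 repetition code used below are toy examples chosen for illustration, not taken from the article.

```python
from math import prod

# Toy channel: binary symmetric channel with crossover probability eps (assumed example).
eps = 0.1

def p(v, u):
    """Transition probability p(v|u): receiving letter v when letter u was sent."""
    return 1 - eps if v == u else eps

# Toy code A of length n = 3: the binary repetition code (assumed example).
A = [(0, 0, 0), (1, 1, 1)]

def likelihood(x, a):
    """Pr{x is received | a is transmitted}; for a memoryless channel this is
    the product of the per-letter transition probabilities."""
    return prod(p(xi, ai) for xi, ai in zip(x, a))

def D_ML(x):
    """Maximum-likelihood decoder: the codeword of A maximizing the likelihood of x."""
    return max(A, key=lambda a: likelihood(x, a))

print(D_ML((1, 0, 1)))  # two of the three letters favor 1, so the output is (1, 1, 1)
```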
