Abstract

Modern advances in microelectronics and the widespread use of computer technology for the software implementation of telecommunication and information devices justify representing messages from various sources in digital form, that is, as code combinations, which are essentially numbers in a certain number system. In effective primary (source) coding, the aim is above all a coding rate as close as possible to the entropy of the source; Claude Shannon's well-known theorems established this bound on the minimum average length of a code combination. Since different messages from a source generally occur with different probabilities, the minimum coding rate can be approached with non-uniform (variable-length) coding methods, for example the algorithm proposed by David Huffman. In pursuing the minimum encoding rate, one must also account for the complexity of implementing the primary encoder, including the number of elements required for its practical realization. The effectiveness of a primary coding method is therefore determined not by a single factor but by several factors that often conflict with one another. One such factor is the choice of the code base (radix), which makes it possible to optimize the cost of practically implementing the primary coding method; this choice is the subject of the present paper.
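
For reference, the bound invoked above is Shannon's noiseless source-coding theorem. In its textbook form, generalized to an arbitrary code base D (a standard statement, not quoted from the paper itself), the minimum achievable average codeword length \bar{L} of a uniquely decodable D-ary code satisfies

    H_D(X) \le \bar{L} < H_D(X) + 1,  where  H_D(X) = -\sum_i p_i \log_D p_i

with p_i the probability of the i-th source message. The base D thus affects both the length of the code combinations and the complexity of the encoder, which is the trade-off behind the choice of code base that the paper examines.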
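
A minimal sketch of the binary Huffman construction mentioned above may make the idea concrete; the symbol set and probabilities here are illustrative, not taken from the paper:

    import heapq

    def huffman_code(symbol_probs):
        """Binary Huffman code for a {symbol: probability} mapping."""
        # Heap entries: (subtree probability, unique tiebreaker, {symbol: codeword})
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(symbol_probs.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
            p2, _, c2 = heapq.heappop(heap)
            # Merging prepends one more code digit: '0' for one subtree, '1' for the other
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
    code = huffman_code(probs)
    avg_len = sum(probs[s] * len(code[s]) for s in probs)
    print(code)      # e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
    print(avg_len)   # 1.75 code digits per symbol

For this dyadic distribution the average length equals the source entropy exactly (1.75 bits/symbol); for general distributions Huffman coding is optimal among symbol codes but may exceed the entropy by up to one code digit per symbol.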
