We study the relation between \emph{latency} and \emph{alphabet size} in the context of multicast network coding. Given a graph $G = (V, E)$ representing a communication network, a subset $S \subseteq V$ of sources, each of which initially holds a set of information messages, and a set $T \subseteq V$ of terminals, we consider the problem of designing a communication scheme that eventually allows all terminals to obtain all the messages held by the sources. In this study we assume that communication proceeds in rounds, where in each round each network node may transmit a single (possibly encoded) information packet on any of its outgoing edges. The objective is to minimize the communication latency, i.e., the number of communication rounds needed until all terminals have obtained all the messages of the source nodes. For sufficiently large alphabet sizes (i.e., large block lengths or packet sizes), it is known that traditional linear multicast network-coding techniques (such as random linear network coding) minimize latency. In this work we study the task of minimizing latency in the setting of limited alphabet size (i.e., finite block length) and, alternatively, the task of minimizing the alphabet size in the setting of bounded latency. We focus on establishing the computational complexity of the problem and present several intractability results. In particular, through reductive arguments, we prove that it is NP-hard to (i) approximate (and in particular to determine) the minimum alphabet size under a latency constraint, and (ii) approximate (and in particular to determine) the minimum latency of communication schemes under a limited alphabet size.
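To make the large-alphabet regime mentioned above concrete, the following is a minimal sketch, not code from the paper, of one round of random linear network coding over a prime field GF(Q) together with the rank test a terminal uses to decide whether it can decode. The field size Q, the packet layout, the helper names, and the toy single-source topology are illustrative assumptions.

```python
# Hypothetical sketch of random linear network coding (RLNC) over GF(Q).
# Q, NUM_MSGS, PACKET_LEN, and all helper names are assumptions for
# illustration; the paper studies how small the alphabet can be made
# under a latency constraint, and vice versa.
import random

Q = 257          # assumed alphabet size: a small prime field GF(Q)
NUM_MSGS = 3     # k source messages, each a vector of symbols over GF(Q)
PACKET_LEN = 8

def random_combination(held):
    """One round at one node: emit a random GF(Q)-linear combination of the
    (payload, global coefficient vector) pairs the node currently holds."""
    coeffs = [random.randrange(Q) for _ in held]
    payload = [sum(c * pkt[0][i] for c, pkt in zip(coeffs, held)) % Q
               for i in range(PACKET_LEN)]
    gvec = [sum(c * pkt[1][j] for c, pkt in zip(coeffs, held)) % Q
            for j in range(NUM_MSGS)]
    return payload, gvec

def rank_gfq(vectors):
    """Rank over GF(Q) of the global coefficient vectors received so far;
    a terminal can recover all NUM_MSGS messages once the rank reaches
    NUM_MSGS (Gaussian elimination modulo the prime Q)."""
    rows = [v[:] for v in vectors]
    rank = 0
    for col in range(NUM_MSGS):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, Q)          # modular inverse, Q prime
        rows[rank] = [(x * inv) % Q for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % Q for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# Toy usage: a single source holding all k messages sends k coded packets
# to one terminal; over GF(257) they are linearly independent with high
# probability, so the terminal can decode after these k transmissions.
messages = [[random.randrange(Q) for _ in range(PACKET_LEN)]
            for _ in range(NUM_MSGS)]
source_buffer = [(messages[j], [int(i == j) for i in range(NUM_MSGS)])
                 for j in range(NUM_MSGS)]
received = [random_combination(source_buffer) for _ in range(NUM_MSGS)]
print("terminal can decode:", rank_gfq([g for _, g in received]) == NUM_MSGS)
```

With a small field (small Q), the random coefficient vectors collide more often, which is one intuition for why shrinking the alphabet can increase the number of rounds needed; the hardness results summarized above concern exactly this trade-off.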