Evaluating the complexity of an algorithm solely by its worst-case input is often not justified. Developing algorithms that perform predictably fast on all possible inputs is of practical importance. When the distribution of input values can reasonably be modeled, probabilistic analysis can be used as a method for designing efficient algorithms. When the available information about the input distribution is insufficient for such modeling, algorithms are instead designed by making part of the algorithm itself random, yielding randomized algorithms. Randomization allows an algorithm to operate with minimal need to store internal state and past events, and the resulting algorithms tend to be compact. The paper studies problems for which reasonably efficient deterministic algorithms already exist; as will be shown, however, constructing suitable randomized algorithms leads to efficient parallel computation schemes with linear expected complexity. The advantages of randomization are especially evident in large computer systems and communication networks that operate without coordination or centralization; examples of such distributed systems include, in particular, the networks of currently popular cryptocurrencies. Randomized heuristics allow a system to adapt to changing operating conditions and minimize the likelihood of conflicts between processes. The paper demonstrates the advantages of a randomized algorithm over deterministic algorithms for the problem of routing in a network with hypercube topology. A theorem is proved that bounds the expected number of steps required by Valiant's randomized algorithm to deliver all messages to their destinations; the linear expected complexity of Valiant's algorithm follows directly from this theorem.
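For illustration, the following is a minimal Python sketch of the routing scheme discussed in the abstract: Valiant's two-phase randomized routing on a d-dimensional hypercube, in which each packet is first sent to a uniformly random intermediate node and then to its true destination, using greedy bit-fixing within each phase. The function names and the bit-reversal test permutation are illustrative assumptions rather than details taken from the paper, and the sketch only counts hops; it does not model the queueing delays addressed by the expected-time theorem.

```python
import random

def bit_fixing_path(src: int, dst: int, dim: int):
    """Greedy bit-fixing route on a dim-dimensional hypercube:
    flip the differing address bits from dimension 0 upward."""
    path = [src]
    cur = src
    for i in range(dim):
        if (cur ^ dst) & (1 << i):
            cur ^= (1 << i)          # traverse the edge along dimension i
            path.append(cur)
    return path

def valiant_route(src: int, dst: int, dim: int, rng=random):
    """Two-phase randomized routing (Valiant): first go to a uniformly
    random intermediate node, then on to the true destination."""
    intermediate = rng.randrange(2 ** dim)
    phase1 = bit_fixing_path(src, intermediate, dim)
    phase2 = bit_fixing_path(intermediate, dst, dim)
    return phase1 + phase2[1:]       # do not repeat the intermediate node

if __name__ == "__main__":
    d = 10                           # hypercube with 2**10 nodes
    # Bit-reversal permutation: a classic hard case for deterministic
    # bit-fixing; the random intermediate step keeps expected route
    # lengths linear in d regardless of the permutation.
    perm = {v: int(format(v, f"0{d}b")[::-1], 2) for v in range(2 ** d)}
    lengths = [len(valiant_route(v, perm[v], d)) - 1 for v in perm]
    print("average hops:", sum(lengths) / len(lengths))
```

Under these assumptions, each phase flips an expected d/2 address bits, so the average printed hop count is close to d, consistent with the linear expected complexity stated in the abstract.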