Abstract

This paper presents an extensive survey of the current state of the art in evaluating the reliability of two-terminal networks (i.e., networks from an input or source S to an output or terminus T). Algorithms for evaluating the reliability of such networks fall into two classes, exact and approximate, the approximate class also covering upper and lower bounds. The earliest exact algorithms relied on the inclusion-exclusion principle applied to the union of all paths from S to T. Applying the inclusion-exclusion principle directly, however, proves laborious, in the sense that it requires checking a number of cases that grows exponentially with the number of paths from S to T. Other categories of exact algorithms include: minimal path/cut enumeration, binary tree methods, brute-force enumeration, decomposition/factoring, reductions and decompositions, and sums of disjoint products. For large networks, such algorithms cannot be applied. After it was established theoretically that computing the reliability of two-terminal networks is #P-complete, attention turned to approximate algorithms. Examples include: Monte Carlo methods, randomly seeded genetic algorithms, efficient binary tree algorithms (heuristically searching for the most important minimal cuts), as well as many upper and lower bounds. Moore and Shannon introduced in the mid-1950s a particular class of two-terminal networks, known as hammock networks, which we shall use as an example. Hammock networks are potentially important for applications in nanoelectronics as well as in biology. We conclude by arguing that radically out-of-the-box approximations and/or bounds are needed if we want to progress towards analyzing biological-sized two-terminal networks.
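As a concrete illustration of the two algorithm classes discussed above, the sketch below (in Python, with function names of our own choosing, not taken from the paper) contrasts exact inclusion-exclusion over the minimal S-T paths, whose number of terms grows as 2^m for m paths, with a simple Monte Carlo estimator that samples edge states and tests S-T connectivity.

```python
import random
from collections import deque
from itertools import combinations

def reliability_inclusion_exclusion(paths, p):
    """Exact two-terminal reliability via inclusion-exclusion over the
    minimal S-T paths.  `paths` is a list of sets of edge ids; `p` maps
    each edge id to its probability of working.  With m paths, the loop
    visits all 2^m - 1 non-empty subsets -- the exponential blow-up
    noted in the abstract."""
    total = 0.0
    for k in range(1, len(paths) + 1):
        for subset in combinations(paths, k):
            union = set().union(*subset)
            prob = 1.0
            for e in union:
                prob *= p[e]
            total += (-1) ** (k + 1) * prob
    return total

def monte_carlo_reliability(n_nodes, edges, s, t, trials=50_000, seed=0):
    """Approximate two-terminal reliability: sample edge states and count
    how often t is reachable from s.  `edges` is a list of (u, v, prob)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Draw a random network state: each edge is up with its own probability.
        adj = {i: [] for i in range(n_nodes)}
        for u, v, prob in edges:
            if rng.random() < prob:
                adj[u].append(v)
                adj[v].append(u)
        # Breadth-first search from s; success if t is reached.
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if t in seen:
            hits += 1
    return hits / trials
```

For instance, on the classic five-edge bridge network (a standard textbook example, not one from the paper), with every edge working with probability 0.5, inclusion-exclusion over its four minimal paths yields exactly 0.5, and the Monte Carlo estimate converges to the same value.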
