Abstract

Group testing is the process of pooling arbitrary subsets from a set of $n$ items so as to identify, with a minimal number of tests, a "small" subset of $d$ defective items. In "classical" non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d\log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature that meet this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be "sparse." Specifically, we consider (separately) scenarios in which 1) items are finitely divisible and hence may participate in at most $\gamma \in o(\log(n))$ tests; or 2) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios, we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In particular, one of our main results shows that $\gamma$-finite divisibility of items forces any non-adaptive group testing algorithm with probability of recovery error at most $\epsilon$ to perform at least $\gamma d (n/d)^{(1-5\epsilon)/\gamma}$ tests. Analogously, for $\rho$-sized constrained tests, we show an information-theoretic lower bound of $\Omega(n/\rho)$ tests for high-probability recovery; hence, in both settings, the number of tests required grows dramatically (relative to the classical setting) as a function of $n$. In both scenarios, we provide both randomized and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n$, $d$, $\gamma$, and $\rho$. The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal: it is independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes, and show that whereas the impact of noise in test outcomes can be obviated with a small (constant-factor) penalty in the number of tests in the $\rho$-sized tests scenario, there is no group-testing procedure, regardless of the number of tests, that can combat noise in the $\gamma$-divisible scenario.
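As a rough, self-contained illustration of the classical non-adaptive setting referenced above (not the sparse constructions or bounds of this paper), the following sketch simulates a random Bernoulli test design with on the order of $d\log(n)$ tests, each pooling roughly $n/d$ items, and decodes with the standard COMP rule (any item appearing in a negative test is cleared). The values of $n$, $d$, and the constant factor in the number of tests are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch of classical non-adaptive group testing (NOT this paper's
# sparse constructions): random Bernoulli design + COMP decoding.
# n, d, and the constant factor in T below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n, d = 1000, 10                          # items and defectives
T = int(4 * d * np.log(n))               # on the order of d*log(n) tests

defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, size=d, replace=False)] = True

# Each item joins each test independently with probability 1/d, so a typical
# test pools on the order of n/d items and a typical item appears in roughly
# T/d (i.e., Theta(log n)) tests -- the "dense" regime the paper contrasts with.
design = rng.random((T, n)) < 1.0 / d

# A test is positive iff it pools at least one defective item.
outcomes = (design & defective).any(axis=1)

# COMP decoding: any item appearing in some negative test is declared
# non-defective; all remaining items are declared defective.
cleared = design[~outcomes].any(axis=0)
estimate = ~cleared

false_pos = np.count_nonzero(estimate & ~defective)
false_neg = np.count_nonzero(~estimate & defective)
print(f"tests: {T}, false positives: {false_pos}, false negatives: {false_neg}")
```

Note that such a design violates both sparsity constraints studied in the paper: each item participates in $\Theta(\log n)$ tests and each test pools about $n/d$ items, which is exactly what the $\gamma$-divisible and $\rho$-sized settings rule out.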
