The notion of a timeout (i.e., the maximal time to wait before retrying an action) arises in many networking contexts. Timeouts are especially common in large-scale networks, where negative acknowledgments (NACKs) of failures incur significantly higher delays than positive acknowledgments (ACKs), and frequently are not employed at all. Selecting a proper timeout involves a tradeoff between waiting too long and loading the network needlessly by waiting too little. The common approach is to set the timeout to a large value, so that, unless the action fails, it is acknowledged within the timeout duration with high probability. This approach leads to overly long, far-from-optimal timeouts. We take a quantitative approach whose purpose is to compute and study the optimal timeout strategy. The tradeoff is modeled by introducing a "cost" per unit time (until success) and a "cost" per repeated attempt. The optimal strategy is then defined as the one a selfish user would follow to minimize its expected cost. We discuss various practical interpretations of these costs. We then derive formulas for the optimal timeout values and study some of their fundamental properties. We identify the conditions under which it is worthwhile to make parallel attempts from the outset. We also demonstrate a striking positive-feedback property and study the interaction that results when many users selfishly apply the optimal timeout strategy; using a noncooperative game model, we show that it suffers from an inherent instability problem. Finally, we discuss some implications of these results for network design.
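As an illustrative sketch of the kind of objective involved (the notation here is ours, not taken from the text): let c_t denote the cost per unit time until success, c_a the cost per attempt, and suppose each attempt is acknowledged within t time units with probability F(t). If a user abandons an attempt and retries after a timeout \tau, and attempts are independent, the expected cost of the strategy can be written as

    J(\tau) = c_t \, E[T(\tau)] + c_a \, E[N(\tau)], \qquad \tau^* = \arg\min_{\tau > 0} J(\tau),

where N(\tau) is the number of attempts (geometric with mean 1/F(\tau)) and T(\tau) is the time until the first acknowledged attempt, which depends on the acknowledgment-delay distribution. A small c_a relative to c_t pushes \tau^* down (retry aggressively), while a large c_a pushes it up; this is the waiting-versus-loading tradeoff described above, under the simplifying assumptions just stated.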