Introduction

In the classical ruin problem, a gambler with initial capital i dollars plays against an adversary with initial capital a − i dollars. Here i and a are positive integers, i < a. The gambler wins a dollar with probability p and loses a dollar with probability q = 1 − p; the game is repeated until one of the players goes broke. (Part of the analysis of the problem shows that the game cannot go on forever.) Another interpretation of this model is that of a gambler playing a game in a casino (the adversary), where the gambler plays until she goes broke or until she wins a fixed predetermined amount, at which time she quits. Given a fixed total capital a, we are interested in the probability q_i that the gambler starting with initial capital i, 1 ≤ i ≤ a − 1, is ruined. This problem is often discussed in a first course in probability and introduces the student to the ideas of random walks and Markov chains. Our main reference is the classic book of Feller [3, Chapter 14]; see also [1], [4], and [5].

We will regard the sequence of the gambler's fortunes after each play as a random walk on the interval [0, a], with absorbing barriers at 0 and a. The probability of ruin is the probability of hitting 0 before hitting a. Using a difference-equation approach and some algebra, the following solution is derived in the case p = 1/2 (see, e.g., [3]):
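The random-walk model above is easy to check numerically. The following is an illustrative sketch, not part of the source derivation: it codes the standard closed forms from Feller (q_i = 1 − i/a when p = 1/2, and ((q/p)^a − (q/p)^i)/((q/p)^a − 1) otherwise) alongside a direct Monte Carlo simulation of the walk; the function names are ours.

```python
import random

def ruin_probability(i, a, p=0.5):
    """Standard closed-form ruin probability for initial capital i,
    total capital a, and win probability p (Feller, Ch. 14)."""
    if p == 0.5:
        return 1 - i / a              # symmetric walk: q_i = 1 - i/a
    r = (1 - p) / p                   # r = q/p
    return (r**a - r**i) / (r**a - 1)

def simulate_ruin(i, a, p=0.5, trials=100_000, rng=random.Random(0)):
    """Monte Carlo estimate: fraction of walks that hit 0 before a."""
    ruined = 0
    for _ in range(trials):
        x = i
        while 0 < x < a:              # walk until an absorbing barrier
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials
```

For example, with i = 3, a = 10, and p = 1/2 the closed form gives 1 − 3/10 = 0.7, and the simulated frequency should agree to within sampling error.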