This chapter discusses the general theory of optimal stopping for Brownian motion. It explains why many sequential decision problems in discrete time lead to stopping problems when they are reformulated in continuous time. The solution of the limiting form of the problem can be constructed, in principle, as the union of certain open sets, an approach closely related to the comparison technique. The chapter then describes a collection of Markov processes, each of which produces a sequence of rewards, together with the problem of engaging exactly one of them at each stage so as to maximize the expected total discounted reward. Each process has an index depending on its current state, and the optimal allocation policy is to engage the process with the highest index at every stage. Multi-armed bandits are thus closely connected to stopping problems, because the index of a single Markov process is itself determined by solving a family of stopping problems.
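To make the index idea concrete, the following sketch treats the simplest special case: each "arm" is a known, deterministic reward stream, for which the index reduces to the supremum over horizons of the discounted-average reward, and the allocation policy simply plays the arm with the highest current index. This is a minimal illustration, not the chapter's construction; the stochastic Markov case requires solving the associated family of stopping problems, and the function names here are illustrative.

```python
def stream_index(rewards, beta):
    """Index of a deterministic reward stream under discount factor beta:
    sup over horizons tau >= 1 of (sum_{t<tau} beta^t r_t) / (sum_{t<tau} beta^t).
    This is the Gittins index specialized to deterministic streams."""
    best = float("-inf")
    num = den = 0.0
    disc = 1.0
    for r in rewards:
        num += disc * r       # discounted reward collected up to this horizon
        den += disc           # discounted time spent up to this horizon
        disc *= beta
        best = max(best, num / den)
    return best


def index_policy(arms, beta, steps):
    """Greedy allocation: at each stage engage the arm whose remaining
    stream has the highest index; return the total discounted reward."""
    pos = [0] * len(arms)     # how far each arm's stream has been consumed
    total, disc = 0.0, 1.0
    for _ in range(steps):
        k = max(range(len(arms)),
                key=lambda i: stream_index(arms[i][pos[i]:], beta))
        total += disc * arms[k][pos[k]]
        pos[k] += 1
        disc *= beta
    return total
```

For a constant stream the index equals the constant reward, since every horizon gives the same discounted average; for a front-loaded stream such as `[2, 0, 0, ...]` the supremum is attained at horizon 1, so the policy takes the large early reward first and then switches to the best remaining arm.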