Monte Carlo methods have proved indispensable for many modern statistical applications, particularly within Bayesian statistics. When analysing complex models, the intractability of likelihoods and posterior distributions requires the use of numerical approximations, and, particularly in high dimensions, Monte Carlo approximation is often the approach of choice. The most commonly used Monte Carlo methods are Markov chain Monte Carlo (MCMC) and its extension, reversible jump MCMC (Green 1995), but important alternatives include sequential Monte Carlo (Doucet et al. 2000), population Monte Carlo (Cappé et al. 2004) and importance sampling (see Fearnhead 2008 for more details of alternatives to MCMC). In all cases, the efficiency of a given Monte Carlo method depends on how it is implemented. In many cases, the theoretical properties of these methods are well understood and can be used to guide implementation. For example, much is known about optimal acceptance rates for various MCMC algorithms (Roberts and Rosenthal 2001); however, using such information to tune an algorithm by hand can be very time-consuming.

This special issue focusses on a class of Monte Carlo methods that “tune themselves”. These algorithms adapt as they are run, and hence are called adaptive Monte Carlo methods. The papers in this issue review such methods, suggest new adaptive Monte Carlo algorithms, and include case studies demonstrating their efficiency. One common theme is the simplicity of implementing many adaptive Monte Carlo methods, and one hope of this special issue is that it will help to encourage their use.
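To make the idea concrete, the following is a minimal sketch, not taken from any paper in this issue, of one of the simplest adaptive schemes: a random-walk Metropolis sampler whose proposal scale is tuned on the fly, via a Robbins–Monro-type recursion, towards the 0.234 acceptance rate identified as asymptotically optimal for random-walk Metropolis by Roberts and Rosenthal (2001). The standard-normal target, the function names, and the specific decay rate of the adaptation step are illustrative assumptions only.

```python
import numpy as np

def log_target(x):
    """Log-density of the illustrative target: a standard normal."""
    return -0.5 * np.sum(x ** 2)

def adaptive_rwm(dim=10, n_iter=20000, target_accept=0.234, seed=0):
    """Random-walk Metropolis whose proposal scale adapts during the run."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    log_p = log_target(x)
    log_scale = 0.0  # log of the proposal standard deviation
    samples = np.empty((n_iter, dim))
    for t in range(n_iter):
        # Gaussian random-walk proposal using the current adapted scale.
        prop = x + np.exp(log_scale) * rng.standard_normal(dim)
        log_p_prop = log_target(prop)
        accept_prob = np.exp(min(0.0, log_p_prop - log_p))
        if rng.random() < accept_prob:
            x, log_p = prop, log_p_prop
        # Robbins-Monro update: raise the scale when accepting too often,
        # lower it when accepting too rarely.  The (t + 1)**-0.6 factor
        # makes the size of each adjustment diminish over time.
        log_scale += (accept_prob - target_accept) / (t + 1) ** 0.6
        samples[t] = x
    return samples, np.exp(log_scale)

if __name__ == "__main__":
    draws, final_scale = adaptive_rwm()
    print(f"adapted proposal scale: {final_scale:.3f}")
    print("post-burn-in mean (first 3 coords):",
          draws[len(draws) // 2:].mean(axis=0)[:3])
```

The decaying adaptation step is the key design choice: it is one simple way to satisfy the diminishing-adaptation condition commonly used to justify the ergodicity of adaptive MCMC, while removing the need to tune the proposal scale by hand.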