The Metropolis algorithm and its variants are perhaps the most widely used methods for generating Markov chains with a specified equilibrium distribution. We study an extension of the Metropolis algorithm from both a decision-theoretic and a rate-of-convergence point of view. The decision-theoretic approach was first taken by Peskun (1973), who established some optimality properties of the classical Metropolis sampler. In this article, we propose an extension of the Metropolis algorithm that reduces the asymptotic variance and accelerates the convergence rate of its classical form. The principal method used to improve the properties of a sampler is to move mass from the diagonal elements of the Markov chain's transition matrix to the off-diagonal elements. A low-dimensional example is given to illustrate that our extended algorithm converges to the stationary distribution in order n steps, the fastest possible rate, while the conventional Metropolis chain takes at least order n^2 log(n) steps.
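The diagonal-to-off-diagonal principle can be illustrated by the comparison underlying Peskun's (1973) ordering: on a discrete state space, the Metropolis acceptance rule min(1, pi_j/pi_i) places at least as much mass off the diagonal as the Barker rule pi_j/(pi_i + pi_j), while both transition matrices leave the target invariant. The 4-state target below is a hypothetical example for illustration, not one taken from the paper.

```python
import numpy as np

# Hypothetical 4-state target distribution (illustrative numbers only).
pi = np.array([0.1, 0.2, 0.3, 0.4])
n = len(pi)
# Symmetric uniform proposal over the other states.
Q = (np.ones((n, n)) - np.eye(n)) / (n - 1)

def chain(accept):
    """Build a reversible transition matrix from an acceptance rule a(pi_i, pi_j)."""
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i, j] = Q[i, j] * accept(pi[i], pi[j])
        P[i, i] = 1.0 - P[i].sum()  # holding probability (diagonal mass)
    return P

metropolis = chain(lambda pi_i, pi_j: min(1.0, pi_j / pi_i))
barker = chain(lambda pi_i, pi_j: pi_j / (pi_i + pi_j))

# Both chains leave pi invariant (detailed balance holds for each rule) ...
assert np.allclose(pi @ metropolis, pi)
assert np.allclose(pi @ barker, pi)

# ... but Metropolis dominates Barker entrywise off the diagonal, since
# min(1, r) >= r / (1 + r) for every r > 0, so it holds less mass on the
# diagonal -- the Peskun ordering the decision-theoretic comparison rests on.
off = ~np.eye(n, dtype=bool)
assert np.all(metropolis[off] >= barker[off])
print("total diagonal mass:", metropolis.diagonal().sum(), "<=", barker.diagonal().sum())
```

The extension proposed in the article pushes this same comparison further, constructing a chain whose off-diagonal entries dominate those of the classical Metropolis sampler itself.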