ABSTRACT
The digital number of a given pixel in a satellite image measures not only the radiance originating from the area of the Earth’s surface represented by that pixel, but also radiance originating from surrounding areas represented by adjacent pixels. This adjacency effect, also known as image blurring, is a fundamental limitation of satellite images and introduces errors in all image processing techniques that quantify the properties of the Earth’s surface on a per-pixel basis. Image deblurring via deconvolution is a traditional technique for reducing the adjacency effect and has proven effective in a variety of remote sensing applications. For example, it has provided an almost fourfold reduction of the adjacency error when support vector machine classifiers are used to estimate land cover proportions at the subpixel level. The remote sensing literature usually assumes that satellite images should be deconvolved with a kernel defined by the sensor’s Point Spread Function, PSF(x, y), overlooking the results of three seminal empirical studies that systematically demonstrated that the optimum deconvolution kernel is a shrunk PSF of the form PSF(x/β, y/β), where β < 1 is the optimum shrink factor. The current empirical procedure for finding the optimum shrink factor is laborious and restrictive, since it requires suitable satellite images of much higher resolution, such that their Ground Sampling Distance (GSD) equals the GSD of the sensor of interest divided by an integer much greater than one. A new theoretical procedure that uses synthetic edge images to find the optimum shrink factor is proposed and applied to the same cases considered by the current empirical procedure, yielding the same results. The new procedure is much simpler to apply, more accurate, and more versatile, opening a variety of research paths in satellite image deblurring.
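
The sketch below is purely illustrative and is not the procedure proposed in the paper: it simulates the adjacency effect by blurring a synthetic edge image with an assumed Gaussian PSF, then deconvolves the result with a shrunk kernel PSF(x/β, y/β), β < 1, using a simple Wiener filter. The Gaussian shape, the values of sigma and β, and the regularisation constant are all assumptions chosen only to make the idea concrete.

```python
# Illustrative sketch (not the paper's implementation), assuming a Gaussian PSF
# and Wiener deconvolution; sigma, beta and k are arbitrary example values.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size, sigma, beta=1.0):
    """Gaussian PSF evaluated at (x/beta, y/beta); beta < 1 shrinks the kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-((xx / beta) ** 2 + (yy / beta) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def pad_psf(psf, shape):
    """Embed the kernel in a zero array and centre it at the origin for the FFT."""
    padded = np.zeros(shape)
    r, c = psf.shape
    padded[:r, :c] = psf
    return np.roll(padded, (-(r // 2), -(c // 2)), axis=(0, 1))

def wiener_deconvolve(image, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution with constant noise-to-signal ratio k."""
    H = np.fft.fft2(pad_psf(psf, image.shape))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Synthetic edge image: a step from 0 to 1 across the middle column.
n = 128
edge = np.zeros((n, n))
edge[:, n // 2:] = 1.0

sensor_psf = gaussian_psf(size=15, sigma=2.0)          # assumed sensor PSF(x, y)
blurred = fftconvolve(edge, sensor_psf, mode="same")   # simulated adjacency effect

# Deconvolve with the nominal PSF and with a shrunk PSF (example beta = 0.7);
# the paper's procedure is what would actually select the optimum beta.
restored_nominal = wiener_deconvolve(blurred, sensor_psf)
restored_shrunk = wiener_deconvolve(blurred, gaussian_psf(15, 2.0, beta=0.7))
print("mean abs error, nominal PSF:", np.abs(restored_nominal - edge).mean())
print("mean abs error, shrunk PSF :", np.abs(restored_shrunk - edge).mean())
```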