In military scenarios it is of crucial importance to observe over long distances, or in other words, to have a large recognition range. Visible-light and IR cameras are those mainly used for this task. However, the recognition range, especially for 'land-to-land' observation (i.e., along horizontal paths at ground or sea level), is hampered by the optical effects of atmospheric turbulence, which are caused by local variations in the refractive index of air. These variations are in turn caused by temperature and humidity differences along the observation path. In images, atmospheric turbulence is observed as locally varying blur and beam wander. If we could reduce these effects, the recognition range would be extended significantly.

Many hardware and software solutions have been developed in the past. Some hardware solutions use adaptive optics[1] and are employed for astronomical applications. However, these are expensive and have only one mode: they perform the same processing on the whole image. Software solutions are much cheaper and can be used in combination with existing cameras. However, many of these software methods[2, 3] also take a global approach (i.e., applying a uniform process across the whole image), whereas for land-to-land observation the effects of atmospheric turbulence are local.

We propose a software solution that performs local processing to reduce these effects. Figure 1 shows a flowchart of our method, which can be divided into three parts: first, frame selection to select the sharpest frames; second, local motion compensation to stabilize the observed scene; and third, multi-frame deblurring to increase the resolution and remove the blur in each frame. This processing results in a more stable image, which is more comfortable and less tiresome for the operator to watch. In addition, more details become visible, because the resolution is enhanced and the image is deblurred. Our solution is flexible and can be implemented in real time.

Figure 1. Flowchart of our turbulence compensation method.
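To illustrate the first stage of the pipeline, the sketch below shows one common way frame selection can be done: score each frame with a sharpness metric and keep only the best-scoring fraction. The metric used here (variance of a discrete Laplacian) and the 50% keep ratio are illustrative assumptions, not details taken from the method described above.

```python
import numpy as np

def sharpness(frame):
    """Sharpness proxy: variance of a discrete Laplacian.

    Turbulence-blurred frames lose high-frequency content,
    which lowers this score. `frame` is a 2D grayscale array.
    """
    lap = (-4.0 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return lap.var()

def select_sharpest(frames, keep=0.5):
    """Return the sharpest `keep` fraction of frames (at least one)."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep))
    order = np.argsort(scores)[::-1]  # descending by sharpness
    return [frames[i] for i in order[:n_keep]]

# Demo: a locally averaged (blurred) copy scores below the original
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)) / 3.0
assert sharpness(sharp) > sharpness(blurred)
assert select_sharpest([blurred, sharp], keep=0.5)[0] is sharp
```

In a real system the selection would run over a sliding window of the video stream, so that only the momentarily sharpest frames feed the motion-compensation and deblurring stages.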