Generative adversarial networks (GANs) have opened new possibilities for image processing and analysis. Inpainting, dataset augmentation with artificial samples, and increasing the spatial resolution of aerial imagery are only a few notable examples of utilising GANs in remote sensing (RS). This research explores the possibility of utilising a GAN to enhance panchromatic (PAN) images with information related to vegetation. Normalised difference vegetation index (NDVI) ground-truth labels were prepared by combining RGB and near-infrared (NIR) orthophotos. The dataset was then used as input to a conditional generative adversarial network (cGAN) performing image-to-image translation. The main goal of the neural network was to generate an artificial NDVI image for each processed 256 px × 256 px patch using only the information available in the panchromatic input. The network achieved a structural similarity index measure (SSIM) of 0.7569 ± 0.1083, a peak signal-to-noise ratio (PSNR) of 26.6459 ± 3.6577 and a root-mean-square error (RMSE) of 0.0504 ± 0.0193 on the test set, which can be considered a high level of agreement. A perceptual evaluation was also performed to verify the method's usability in a real-life scenario. The research confirms that the structure and texture of a panchromatic aerial RS image contain sufficient information for NDVI estimation for various objects of urban space. Although the results can highlight vegetation-rich areas and distinguish them from the urban background, there is still room for improvement in the accuracy of the estimated values. These findings open exciting opportunities for the processing and analysis of historical RS imagery.
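To make the two quantitative steps mentioned above concrete, the sketch below illustrates (a) how NDVI ground truth can be derived from co-registered NIR and red bands and (b) how SSIM, PSNR and RMSE can be computed per patch. This is a minimal illustration, not the paper's implementation: the function names, the small epsilon guarding the denominator, and the assumption that NDVI values are rescaled to [0, 1] before metric computation are assumptions introduced here for clarity.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red) from co-registered reflectance bands.

    The eps term (an assumption, not from the paper) avoids division by zero
    over pixels where both bands are zero.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)


def evaluate_patch(ndvi_true: np.ndarray, ndvi_pred: np.ndarray) -> dict:
    """SSIM, PSNR and RMSE for one 256 x 256 patch.

    Assumes both arrays are NDVI maps rescaled to [0, 1]; data_range would
    need adjusting for other value ranges.
    """
    ssim = structural_similarity(ndvi_true, ndvi_pred, data_range=1.0)
    psnr = peak_signal_noise_ratio(ndvi_true, ndvi_pred, data_range=1.0)
    rmse = float(np.sqrt(np.mean((ndvi_true - ndvi_pred) ** 2)))
    return {"ssim": ssim, "psnr": psnr, "rmse": rmse}
```

In a setup like the one described, such per-patch scores would be averaged over the test set to obtain summary statistics of the form reported above (mean ± standard deviation).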