The object of this study is the process of removing clouds from optical satellite images. Cloud removal is an important stage in processing Earth remote sensing (ERS) data, aimed at reconstructing the information hidden by these atmospheric disturbances. An analysis of the shortcomings of fusing purely optical data led to the conclusion that the best solution to the cloud removal problem is a combination of optical and radar data. Compared to conventional image processing methods, neural networks can deliver better performance owing to their ability to adapt to different conditions and image types. Accordingly, a generative adversarial network (GAN) with a cycle-consistent architecture, whose generator is built from seven ResNeXt blocks, was constructed for removing clouds from optical satellite imagery with the aid of synthetic aperture radar (SAR) imagery. The constructed model generates fewer artifacts during image transformation than other models that process multi-temporal images. Experimental results on the SEN12MS-CR dataset demonstrate the model's ability to remove dense clouds from single-date Sentinel-2 images; this is confirmed by the pixel-level reconstruction of all multispectral channels with an average RMSE of 2.4%. To increase the informativeness of the input during model training, a C-band SAR image is used; its longer wavelength penetrates cloud cover and thereby provides medium-resolution data on the geometric structure of the Earth's surface. Applying this model could improve situational awareness at all levels of command and control of the Armed Forces (AF) of Ukraine through the use of up-to-date Earth observations from various ERS systems.
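To make the two technical ingredients of the abstract concrete, the sketch below illustrates (a) the aggregated "split-transform-merge" residual transform that defines a ResNeXt unit, the building block named for the generator, and (b) an RMSE metric expressed as a percentage of the image's dynamic range. Both are minimal toy versions on feature vectors, not the paper's actual generator; all shapes, weights, and the normalization of the 2.4% figure are assumptions made for illustration.

```python
import numpy as np

def resnext_unit(x, weights_in, weights_out):
    """Toy aggregated residual transform (the ResNeXt idea) on a feature vector.

    Each branch projects x to a low-dimensional bottleneck, applies a
    nonlinearity, embeds the result back, and the branch outputs are summed;
    the sum is added to x through an identity (residual) shortcut.
    The number of branches is the "cardinality" of the unit.
    """
    out = np.zeros_like(x)
    for w_in, w_out in zip(weights_in, weights_out):
        h = np.maximum(w_in @ x, 0.0)   # project to bottleneck + ReLU
        out += w_out @ h                # embed back, aggregate by summation
    return x + out                      # identity shortcut

def rmse_percent(reference, reconstructed):
    """RMSE as a percentage of the reference's dynamic range (assumed
    normalization for a figure like the abstract's 2.4%)."""
    err = np.sqrt(np.mean((reference - reconstructed) ** 2))
    return 100.0 * err / (reference.max() - reference.min())

# Illustrative run with random branch weights (cardinality C = 8).
rng = np.random.default_rng(0)
d, bottleneck, C = 16, 4, 8
W_in  = [rng.standard_normal((bottleneck, d)) * 0.1 for _ in range(C)]
W_out = [rng.standard_normal((d, bottleneck)) * 0.1 for _ in range(C)]
x = rng.standard_normal(d)
y = resnext_unit(x, W_in, W_out)
print(y.shape)  # (16,)
```

In the full generator, seven such units would be stacked on 2-D feature maps (with grouped convolutions playing the role of the per-branch projections), and the cycle-consistent GAN training would constrain the SAR-to-optical translation.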