Abstract This paper presents a novel approach to assimilating cloud observations into the ICOsahedral Nonhydrostatic weather forecasting model for the regional scale (ICON-D2), which is operated by the German Weather Service (Deutscher Wetterdienst, DWD). A convolutional neural network (CNN) is trained to detect clouds in camera photographs. The network's output is a grayscale image in which each pixel carries a value between 0 and 1, describing the probability that the pixel belongs to a cloud (1) or not (0). Averaging these probabilities over a box of pixels yields a cloud-cover value for that region of the image. A forward operator is built to map an ICON model state into observation space: a three-dimensional grid is constructed in the camera's perspective, the ICON model variable cloud cover (CLC) is interpolated onto that grid, and the maximum CLC along each ray of the camera grid is taken as the model equivalent for the corresponding pixel. After superobbing, monitoring experiments were conducted to compare observations and model equivalents over an extended period, yielding promising results. Furthermore, we show the performance of a single assimilation step as well as a longer assimilation experiment over a period of six days, which also yields good results. These findings serve as a proof of concept; further research is required before such novel observations can be assimilated operationally in any numerical weather prediction (NWP) model.
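To make the two central operations of the abstract concrete, the following is a minimal sketch (not taken from the paper; all function names, array shapes, and the random test data are illustrative assumptions): box-averaging of the CNN's per-pixel cloud probabilities to obtain a cloud-cover observation, and the forward operator's maximum of CLC along each camera ray as the model equivalent.

```python
import numpy as np

def cloud_cover_from_mask(prob_mask, box):
    """Average CNN cloud probabilities over a pixel box to obtain a
    cloud-cover value in [0, 1] for that region of the image.

    prob_mask : 2-D array of per-pixel cloud probabilities (CNN output).
    box       : (row_start, row_stop, col_start, col_stop), hypothetical
                superobbing box in pixel coordinates.
    """
    r0, r1, c0, c1 = box
    return float(prob_mask[r0:r1, c0:c1].mean())

def model_equivalent(clc_along_rays):
    """Model equivalent per pixel: the maximum cloud cover (CLC) along
    each ray of the camera-perspective grid, after CLC has been
    interpolated onto that grid (interpolation not shown here).

    clc_along_rays : array of shape (n_pixels, n_points_along_ray).
    """
    return clc_along_rays.max(axis=1)

# Toy data standing in for a real CNN mask and interpolated CLC values.
rng = np.random.default_rng(0)
mask = rng.random((480, 640))          # CNN probabilities in [0, 1]
print(cloud_cover_from_mask(mask, (0, 100, 0, 100)))

clc = rng.random((480 * 640, 50))      # CLC sampled at 50 points per ray
print(model_equivalent(clc)[:5])
```

Under these assumptions, both the observation and its model equivalent live on the same [0, 1] cloud-cover scale, which is what allows the monitoring and assimilation experiments described above to compare them directly.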