In the context of global climate change and rising anthropogenic loads, outbreaks of both endemic and invasive pests, pathogens, and diseases pose an increasing threat to the health, resilience, and productivity of natural forests and forest plantations worldwide. Effective management of such threats depends on the ability to act at an early stage to limit the spread of damage, which is difficult to achieve over large territories. Recognition technologies based on the analysis of Earth observation data underpin effective tools for monitoring the spread of degradation processes, supporting pest population control, forest management, and conservation strategies in general. In this study, we present a machine learning-based approach for recognizing damaged forests from open-access Sentinel-2 remote sensing imagery supported by Google Earth data, using the bark beetle Polygraphus proximus Blandford (the polygraph) as an example. To develop the algorithm, we first investigated and annotated images in the channels corresponding to natural color perception (red, green, and blue) available in Google Earth. Deep neural networks were applied in two problem formulations: semantic segmentation and object detection. Our experiments yielded a model that quantifies changes in the target objects with high accuracy, achieving an F1-score of 84.56%, determining the number of damaged trees, and estimating the area occupied by withered stands. The resulting damage masks were further integrated with medium-resolution Sentinel-2 images, achieving an accuracy of 81.26%, which opens the way to operational monitoring systems that recognize damaged forests across the region and makes the solution both rapid and cost-effective. Additionally, a unique annotated dataset of forest areas damaged by the polygraph in the study region has been collected.
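As a point of reference for the reported scores, the sketch below shows one common way a pixel-wise F1-score could be computed between a predicted binary damage mask and an annotated ground-truth mask. The paper does not specify its evaluation code; the function name `pixelwise_f1` and the assumption that both masks are binary arrays with 1 marking damaged-forest pixels are illustrative choices, not the authors' implementation.

```python
import numpy as np

def pixelwise_f1(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Pixel-wise F1-score between two binary masks.

    Both arrays are assumed to have the same shape, with nonzero
    values marking pixels classified as damaged forest.
    """
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)

    tp = np.logical_and(pred, true).sum()    # damaged pixels found
    fp = np.logical_and(pred, ~true).sum()   # false alarms
    fn = np.logical_and(~pred, true).sum()   # missed damage

    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Under this convention, an F1-score of 84.56% would mean the harmonic mean of per-pixel precision and recall over the annotated damage masks reaches 0.8456.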