When a strong earthquake occurs, roads are lifelines for rescue operations. The rapid development of high-resolution satellite imaging platforms has made remote-sensing-based road damage identification feasible. However, road damage identification has long required substantial manual effort, making it difficult to meet the needs of rapid post-disaster response. Automatic recognition of road damage from satellite images remains challenging because damaged areas appear with blurry boundaries, varied sizes, and uneven spatial distributions. Aiming at automatic pixel-level road damage identification, we introduce the first road damage dataset, CAU-RoadDamage, which includes high-resolution satellite images and pixel-level human annotations. Moreover, we apply a pre-trained vision foundation model to automatic road damage identification for the first time. Low-rank adaptation (LoRA) is used to fine-tune the foundation model on the satellite images, and two-way attention is used to integrate the foundation model with domain-specialist model components. The proposed segmentation model is compared to multiple state-of-the-art methods on the CAU-RoadDamage dataset. Our approach achieves the highest F1 score, 76.09%, which is notably higher than that of the other models. The experimental results demonstrate the feasibility of pixel-level road damage recognition and the applicability of vision foundation models to downstream remote sensing tasks. The CAU-RoadDamage dataset will be made publicly available at https://github.com/CAU-HE/RoadDamageExtraction.
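The low-rank adaptation mentioned above can be illustrated with a minimal sketch. This is not the paper's actual model or code; it only shows the core LoRA idea under assumed shapes: a frozen weight matrix W is augmented with a trainable low-rank update (alpha / r) * B @ A, so only a small fraction of parameters is fine-tuned.

```python
import numpy as np

# Illustrative LoRA sketch (hypothetical shapes, not the paper's model).
# A frozen weight W (d_out x d_in) is adapted as W' = W + (alpha / r) * B @ A,
# where only the low-rank factors A (r x d_in) and B (d_out x r) are trained.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 256, 256, 8, 16

W = rng.standard_normal((d_out, d_in))       # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable low-rank factor
B = np.zeros((d_out, r))                     # zero init: no change at start

def adapted_forward(x):
    """Forward pass through the LoRA-adapted linear layer."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen layer.
assert np.allclose(adapted_forward(x), W @ x)

# Trainable parameters: only A and B, a small fraction of the full weight.
full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.4f}")
```

With the assumed rank r = 8 and a 256x256 weight, the trainable fraction is 4096/65536 = 0.0625, i.e. about 6% of the full layer, which is why LoRA makes fine-tuning a large foundation model tractable.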