Buildings are primary sites of human activity and key focal points in the military domain. Rapidly detecting damaged/changed buildings (DCB) and conducting detailed assessments can effectively support urbanization monitoring, disaster response, and humanitarian assistance. Currently, object detection (OD) and change detection (CD) for DCB are performed almost independently of each other, making it difficult to determine the location and the details of changes simultaneously. To address this, we designed a cross-task network called SDCINet that integrates OD and CD, and created four dual-task datasets focused on disasters and urbanization. SDCINet is a novel dual-task deep learning framework composed of a consistency encoder, a differentiation decoder, and a cross-task global attention collaboration (CGAC) module. It models differential feature relationships from bi-temporal images and performs end-to-end pixel-level prediction and object bounding box regression. The bi-directional traction function of CGAC is used to deeply couple the OD and CD tasks. Additionally, we collected bi-temporal images from 10 locations worldwide that experienced earthquakes, explosions, wars, or conflicts to construct two datasets specifically for damaged-building OD and CD. We also constructed two datasets for changed-building OD and CD based on two publicly available CD datasets. These four datasets can serve as benchmarks for dual-task research on DCB. Using them, we conducted extensive performance evaluations of 18 state-of-the-art models from the perspectives of OD, CD, and instance segmentation. Benchmark results demonstrated the superior performance of SDCINet, and ablation experiments and evaluative analyses confirmed the effectiveness and unique value of CGAC.