Semantic segmentation requires methods that learn high-level features while handling large volumes of data. Convolutional neural networks (CNNs) can learn distinctive, adaptive features for this purpose. However, because remote sensing images are large and have high spatial resolution, these networks cannot efficiently analyze an entire scene. Recently, deep transformers have proven capable of capturing global interactions between the objects in an image. In this paper, we propose a new segmentation model that combines convolutional neural networks with transformers, and we show that this combination of local and global feature extraction provides significant advantages for remote sensing segmentation. In addition, the proposed model includes two fusion layers designed to efficiently represent the network's multimodal inputs and outputs. The input fusion layer extracts feature maps that summarize the relationship between image content and elevation maps (digital surface models, DSMs). The output fusion layer uses a novel multitask segmentation strategy in which class labels are identified using class-specific feature extraction layers and loss functions. Finally, a fast-marching method converts unidentified class labels to those of their closest known neighbors. Our results demonstrate that the proposed method improves segmentation accuracy over state-of-the-art techniques.
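
To make the input fusion concrete, the sketch below shows one plausible realization in PyTorch: separate convolutional stems encode the RGB tile and the co-registered DSM, and a 1x1 convolution mixes the concatenated modality features. The module, layer choices, and parameter names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class InputFusion(nn.Module):
    """Hypothetical input fusion layer: learns feature maps relating
    image content to the co-registered elevation map (DSM)."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        # Separate stems for the two modalities (3-channel RGB, 1-channel DSM).
        self.rgb_stem = nn.Conv2d(3, out_channels, kernel_size=3, padding=1)
        self.dsm_stem = nn.Conv2d(1, out_channels, kernel_size=3, padding=1)
        # 1x1 convolution mixes the concatenated modality features.
        self.mix = nn.Conv2d(2 * out_channels, out_channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, dsm: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_stem(rgb), self.dsm_stem(dsm)], dim=1)
        return torch.relu(self.mix(fused))

# Usage: a 256x256 image tile with its elevation map.
fusion = InputFusion(out_channels=64)
rgb = torch.randn(1, 3, 256, 256)   # RGB orthophoto tile
dsm = torch.randn(1, 1, 256, 256)   # elevation map (DSM)
features = fusion(rgb, dsm)          # -> (1, 64, 256, 256)
```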
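
The final label-filling step can likewise be sketched. The paper propagates labels with a fast-marching method; the stand-in below instead uses SciPy's Euclidean distance transform to assign each unidentified pixel the label of its nearest known neighbor, which gives the same nearest-neighbor assignment under a Euclidean metric. The function name and the `unknown` sentinel are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_unknown_labels(labels: np.ndarray, unknown: int = -1) -> np.ndarray:
    """Assign each unidentified pixel the label of its closest known
    neighbor. The paper uses a fast-marching method; this sketch uses
    a Euclidean distance transform as a stand-in."""
    mask = labels == unknown
    # For every pixel, indices of the nearest known (non-masked) pixel.
    _, nearest = distance_transform_edt(mask, return_indices=True)
    filled = labels.copy()
    filled[mask] = labels[tuple(idx[mask] for idx in nearest)]
    return filled

# Usage: a toy label map where -1 marks pixels left unidentified
# by the class-specific segmentation heads.
labels = np.array([[0, 0, -1],
                   [-1, 1, 1],
                   [2, -1, 1]])
print(fill_unknown_labels(labels))
```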