Hyperspectral images (HSIs) provide rich spectral–spatial information, while light detection and ranging (LiDAR) data reflect elevation information; the two modalities can be jointly exploited for better land-cover classification. However, due to their different imaging mechanisms, HSI and LiDAR data always exhibit significant differences, so current pixel-wise feature fusion classification methods relying on concatenation or weighted fusion are not effective. To achieve accurate classification results, it is important to extract and fuse the similar high-order semantic information and the complementary discriminative information contained in multimodal data. In this paper, we propose a novel coupled adversarial learning based classification (CALC) method for fusion classification of HSI and LiDAR data. Specifically, a coupled adversarial feature learning (CAFL) sub-network is first trained to effectively learn high-order semantic features from HSI and LiDAR data in an unsupervised manner. On one hand, the proposed CAFL sub-network establishes an adversarial game between dual generators and discriminators, so that the learnt features preserve the detailed information in the HSI and LiDAR data, respectively. On the other hand, by designing a weight-sharing and linear fusion structure in the dual generators, we can simultaneously extract similar high-order semantic information and modal-specific complementary information. Meanwhile, a supervised multi-level feature fusion classification (MFFC) sub-network is trained to further improve the classification performance via an adaptive probability fusion strategy. In brief, the low-level, mid-level, and high-level features learnt by the CAFL sub-network yield multiple class-probability estimates, which are then adaptively combined to generate a final accurate classification result.
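The adaptive probability fusion step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `adaptive_fusion` and the use of softmax-normalised per-level weights are assumptions, standing in for whatever learnable weighting the MFFC sub-network actually uses.

```python
import numpy as np

def adaptive_fusion(probs_per_level, fusion_logits):
    """Combine class-probability estimates from low-, mid- and high-level
    features into one prediction (hypothetical sketch).

    probs_per_level: list of (n_samples, n_classes) probability arrays,
                     one per feature level.
    fusion_logits:   (n_levels,) learnable scores, one per level.
    """
    # Softmax over the per-level scores gives convex combination weights,
    # so the fused output remains a valid probability distribution.
    w = np.exp(fusion_logits - fusion_logits.max())
    w /= w.sum()
    fused = sum(wi * p for wi, p in zip(w, probs_per_level))
    # Renormalise per sample to guard against numerical drift.
    return fused / fused.sum(axis=1, keepdims=True)
```

In this sketch the fused prediction is a weighted average of the per-level predictions, with the weights learnt jointly with the rest of the network; the final class label would be taken as the argmax of the fused probabilities.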
Both the CAFL and MFFC sub-networks are trained collaboratively by optimizing a designed joint loss function, which consists of an unsupervised adversarial loss and a supervised classification loss. Overall, by optimizing the joint loss function, the proposed CALC network is pushed to learn highly discriminative fusion features from the multimodal data, leading to higher classification accuracy. Extensive experiments on three well-known HSI and LiDAR data sets demonstrate that the proposed CALC method achieves superior classification performance compared with several state-of-the-art methods. The source code of the proposed method will be made publicly available at https://github.com/Ding-Kexin/CALC.
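The joint objective can be sketched as a weighted sum of the two terms named above: a binary cross-entropy adversarial loss for the dual discriminators (one per modality) and a supervised classification cross-entropy. This is a minimal illustration under assumed conventions; the function names, the trade-off weight `lam`, and the exact loss composition are hypothetical rather than taken from the released code.

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    """Binary cross-entropy for discriminator outputs in (0, 1)."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def joint_loss(d_real_hsi, d_fake_hsi, d_real_lidar, d_fake_lidar,
               class_probs, labels, lam=1.0):
    """Unsupervised adversarial loss for the dual discriminators plus the
    supervised classification loss; lam is an assumed trade-off weight."""
    # Each discriminator is trained to score real features as 1 and
    # generated features as 0 (one adversarial game per modality).
    adv = (bce(d_real_hsi, np.ones_like(d_real_hsi)) +
           bce(d_fake_hsi, np.zeros_like(d_fake_hsi)) +
           bce(d_real_lidar, np.ones_like(d_real_lidar)) +
           bce(d_fake_lidar, np.zeros_like(d_fake_lidar)))
    # Supervised term: cross-entropy of the fused class probabilities
    # against the ground-truth labels.
    ce = -np.mean(np.log(np.clip(
        class_probs[np.arange(len(labels)), labels], 1e-12, None)))
    return adv + lam * ce
```

In an actual training loop the adversarial term would be minimised for the discriminators and maximised (or flipped) for the generators, while the classification term back-propagates through both sub-networks, which is what couples the unsupervised and supervised objectives.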