In recent years, the rising frequency of emergencies has underscored the pressing need for rapid delivery of essential supplies. Efficient integration and precise allocation of emergency resources under joint government–enterprise stockpiling models are pivotal for enhancing emergency response effectiveness and minimizing economic losses. However, current research focuses predominantly on contract coordination and cost sharing within these joint reserve models, overlooking significant discrepancies in emergency supply classification standards between the government and enterprise sectors, as well as the asymmetry of cross-sectoral and cross-regional supply information. This oversight critically impedes the timeliness and accuracy of emergency supply responses. In practice, matching the same materials across the differing government and enterprise classification standards has relied on manual judgment, an approach that is inefficient and prone to error. To address these challenges, this study proposes a method that leverages the BERT pre-trained language model and a TextCNN neural network to establish a robust mapping between the two classification schemes. The approach extracts textual representations of the classes in both taxonomies, generates comparable sentence vectors via average pooling, and computes cosine similarity scores to produce precise class-to-class mappings. Using China’s Classification and Coding of Emergency Supplies standard and the Global Product Classification (GPC) standard as a case study, empirical validation on annotated data shows that the BERT-TextCNN model achieves an accuracy of 98.22%, surpassing other neural network baselines such as BERT-CNN, BERT-RNN, and BERT-BiLSTM. These results underscore the potential of advanced neural network techniques for improving emergency supply management across sectors and regions.
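To make the sentence-vector matching step concrete, the sketch below illustrates the general idea of encoding class descriptions with BERT, average-pooling the token embeddings, and ranking candidate classes by cosine similarity. It is not the authors' implementation: the TextCNN layer of the full BERT-TextCNN model is omitted, the `bert-base-multilingual-cased` checkpoint and the example labels are assumptions, and the helper names are hypothetical.

```python
# Minimal sketch of sentence-vector matching with BERT average pooling and
# cosine similarity (the TextCNN component of the full model is omitted).
# Assumptions: Hugging Face transformers + PyTorch, an assumed multilingual
# checkpoint, and hypothetical example labels.
import torch
from transformers import BertModel, BertTokenizer

CHECKPOINT = "bert-base-multilingual-cased"  # assumed checkpoint, not from the paper
tokenizer = BertTokenizer.from_pretrained(CHECKPOINT)
model = BertModel.from_pretrained(CHECKPOINT)
model.eval()

def sentence_vector(text: str) -> torch.Tensor:
    """Encode a class description and average-pool its token embeddings."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)         # exclude padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (1, 768)

def best_match(source_label: str, candidate_labels: list[str]) -> str:
    """Map a class from one standard to the most similar class in the other."""
    src_vec = sentence_vector(source_label)
    scores = [torch.cosine_similarity(src_vec, sentence_vector(c)).item()
              for c in candidate_labels]
    return candidate_labels[int(torch.tensor(scores).argmax())]

# Hypothetical labels, for illustration only.
print(best_match("急救药品",
                 ["First Aid Kits", "Bottled Water", "Portable Generators"]))
```

In the full model described in the abstract, a TextCNN would be applied on top of the BERT representations before pooling; the cosine-similarity ranking over the resulting sentence vectors is what yields the cross-standard class mapping.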