As technology advances, online medical consultation is becoming increasingly widespread. However, its accuracy and credibility are constrained by model design and semantic understanding; in particular, complex structured texts are still not understood accurately, which weakens the identification of users' intentions and needs. This paper therefore proposes a new method for medical text parsing that addresses the core tasks of named entity recognition, intention recognition, and slot filling within a multi-task learning framework: BERT captures contextual semantic information, BiGRU and BiLSTM networks are combined for sequence modeling, a CRF layer performs sequence labeling, and a DPCNN performs classification prediction, so that both entity recognition and intention recognition are accomplished. On this basis, a multi-task learning model based on BiGRU-BiLSTM is built and validated on the CBLUE and CMID datasets. The results show that named entity recognition and intention recognition reach accuracies of 86% and 89%, respectively, improving performance across the tasks and strengthening the model's ability to process complex text. Applied to medical text analysis, the method improves text generalization and the accuracy of online medical intelligent dialogue.
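To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of a shared-encoder multi-task model: BERT supplies contextual embeddings, a BiGRU and BiLSTM stack refines them, a token-level tagging head stands in for the CRF-based named entity recognition branch, and a DPCNN-style convolutional head performs intention classification. All names (MedicalMultiTaskModel, num_tags, num_intents) and hyperparameters are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel


class MedicalMultiTaskModel(nn.Module):
    def __init__(self, num_tags: int, num_intents: int, hidden: int = 128):
        super().__init__()
        # Small randomly initialised BERT so the sketch runs without downloads;
        # the paper would load pretrained BERT weights instead.
        cfg = BertConfig(hidden_size=hidden, num_hidden_layers=2,
                         num_attention_heads=4, intermediate_size=256)
        self.bert = BertModel(cfg)
        # Shared recurrent refinement: BiGRU followed by BiLSTM.
        self.bigru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.bilstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        # NER branch: token-level tag scores (a CRF layer would decode these in the paper).
        self.tag_proj = nn.Linear(2 * hidden, num_tags)
        # Intention branch: DPCNN-style convolution blocks with max-pooling over tokens.
        self.conv = nn.Sequential(
            nn.Conv1d(2 * hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.intent_proj = nn.Linear(hidden, num_intents)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h, _ = self.bigru(h)
        h, _ = self.bilstm(h)
        tag_logits = self.tag_proj(h)                      # [batch, seq_len, num_tags]
        pooled = self.conv(h.transpose(1, 2)).squeeze(-1)  # [batch, hidden]
        intent_logits = self.intent_proj(pooled)           # [batch, num_intents]
        return tag_logits, intent_logits


# Usage: the two heads would be trained jointly with a combined loss.
model = MedicalMultiTaskModel(num_tags=9, num_intents=12)
ids = torch.randint(0, 100, (2, 16))
mask = torch.ones_like(ids)
tag_logits, intent_logits = model(ids, mask)
print(tag_logits.shape, intent_logits.shape)
```

The sketch illustrates the shared-encoder design implied by the abstract, in which one contextual representation feeds both the sequence-labeling and classification heads; the exact layer sizes, CRF decoding, and slot-filling head are omitted here.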