With the rapid development of machine learning, increasingly challenging question-answering datasets have emerged, and with them machine reading comprehension technology. Traditional machine reading comprehension methods focus mostly on word-level semantics and are weak at extracting logical relationships from text, which limits their logical reasoning ability. To strengthen both the extraction of logical relationships and the logical reasoning ability of machine reading comprehension, a neural-symbolic model based on logical reasoning is proposed; the logical expressions captured by the neural-symbolic model are converted into text input and used to train a hybrid-reasoning reading comprehension model based on symbolic logic. Unlike traditional machine reading comprehension models, the hybrid-reasoning model uses symbol definition and logic capture to extract logical symbols and generate logical expressions. Experimental results show that, with a training set of 4,000 samples, the neural-symbolic model achieves an accuracy of 70.08% and an F-measure of 70.05%. On a standard postgraduate-entrance-examination logical reasoning dataset, the hybrid-reasoning reading comprehension model reaches an accuracy of 88.31%, higher than the 58.74% of the language-aware graph network model. On a four-option multiple-choice question-answering dataset, its accuracy is 40.92%, which is 1.58 percentage points higher than that of the language-aware graph network model.
In summary, the proposed neural-symbolic model and hybrid-reasoning reading comprehension model show superior performance: they capture the logical relationships in the text of the datasets well, improve the model's feature abstraction and reasoning ability, and effectively shorten training time, improving model efficiency.
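To make the "symbol definition and logic capture" step concrete, the following is a minimal, purely illustrative sketch. The abstract does not specify the paper's actual symbol definitions or capture algorithm, so the keyword-to-connective mapping, function name, and atom-forming heuristic below are all assumptions invented for illustration:

```python
import re

# Hypothetical mapping from natural-language connective keywords to
# propositional-logic symbols; the paper's real symbol definitions are
# not given in the abstract.
CONNECTIVES = {
    "if": "->",
    "and": "&",
    "or": "|",
    "not": "~",
}

def extract_logical_expression(sentence: str) -> str:
    """Toy 'logic capture': replace connective keywords with logical
    symbols and join the remaining word runs into atomic propositions."""
    tokens = re.findall(r"[a-zA-Z]+", sentence.lower())
    expr, atom = [], []
    for tok in tokens:
        if tok in CONNECTIVES:
            if atom:  # close the current atom before emitting a connective
                expr.append("_".join(atom))
                atom = []
            expr.append(CONNECTIVES[tok])
        else:
            atom.append(tok)
    if atom:
        expr.append("_".join(atom))
    return " ".join(expr)

print(extract_logical_expression("it rains and it pours"))
# → it_rains & it_pours
```

The resulting expression string could then be serialized back into text input for the downstream comprehension model, as the abstract describes; real logic capture would of course need parsing far beyond this keyword heuristic.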