Abstract

Formula recognition is widely used in intelligent document processing, where it can significantly reduce the time needed to input mathematical formulas; however, traditional methods suffer from limited accuracy. To address the complexity of formula input, an end-to-end encoder-decoder framework with an attention mechanism is proposed that converts formula images into LaTeX sequences. A Vision Transformer (ViT) is employed as the encoder to convert the original input image into a set of semantic vectors. Because mathematical formulas are inherently two-dimensional, positional embedding is introduced to guarantee the uniqueness of each character position, so that the relative positions and spatial characteristics of formula characters are captured accurately. The decoder adopts an attention-based Transformer, which translates the input vectors into target LaTeX characters. The encoder and decoder are trained jointly with cross-entropy as the loss function, and the model is evaluated on the im2latex-100k dataset and CROHME 2014. Experiments show that on im2latex-100k the model achieves a BLEU score of 92.11, a minimum edit distance (MED) of 0.90, and an exact match (EM) rate of 0.62. This paper's contribution is to apply machine translation to formula recognition and realize an end-to-end transformation from a formula's trajectory point sequence to a LaTeX sequence, providing a new approach to formula recognition based on deep learning.
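The abstract states that positional embedding is introduced so that each character position is unique. The paper does not give the exact formula here; a minimal sketch, assuming the standard sinusoidal scheme from the original Transformer, illustrates how every position receives a distinct vector that can be added to a patch or token embedding:

```python
import math

def positional_embedding(num_positions, d_model):
    """Sinusoidal positional embedding (an assumed scheme; the
    abstract does not specify the formula). Each position index
    maps to a unique d_model-dimensional vector of interleaved
    sine and cosine values at different frequencies."""
    pe = []
    for pos in range(num_positions):
        row = []
        for i in range(d_model // 2):
            # Frequency decreases as the dimension index i grows.
            angle = pos / (10000 ** (2 * i / d_model))
            row.append(math.sin(angle))
            row.append(math.cos(angle))
        pe.append(row)
    return pe

pe = positional_embedding(16, 8)
# No two positions share the same embedding vector, which is the
# "uniqueness of the character position" property the model relies on.
assert all(pe[a] != pe[b] for a in range(16) for b in range(a + 1, 16))
```

Adding such a vector to each encoder input lets the attention layers distinguish, for example, a superscript patch from a baseline patch at the same horizontal offset.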
