ABSTRACT Organizations purchase goods or services from different suppliers and use invoice documents to confirm payment. Invoice documents contain information that can support business decisions, but extracting that information requires substantial resources. The traditional approach relies on template matching, which identifies the parts of an image that match a predefined template and requires new manual annotation whenever a new image layout is processed. A system that robustly extracts entities from invoices with different layouts is therefore needed. Existing research has applied deep learning and Named Entity Recognition (NER) to information extraction, but most work on invoice extraction has targeted English and Chinese. In this study, we constructed a deep learning model using BiLSTM-CRF (Bidirectional Long Short-Term Memory-Conditional Random Fields) with word and character embeddings to extract information from Thai invoice images with different layouts. The model was evaluated with Semantic Evaluation (SemEval) at the full named-entity level. Our experimental results show that this method achieves a precision of 0.9557, recall of 0.9486, and F1-score of 0.9521 for partial matches, and a precision of 0.9329, recall of 0.9259, and F1-score of 0.9294 for exact matches; the F1-score was significantly influenced by image quality and the text produced by Optical Character Recognition (OCR).

Abbreviations: BERT: bidirectional encoder representations from transformers; BiLSTM: bidirectional long short-term memory; COR: correct; CRF: conditional random fields; CV: computer vision; ELMO: embeddings from language model; INC: incorrect; MIS: missing; MSE: mean squared error; MUC: message understanding conference; NER: named entity recognition; NLP: natural language processing; OCR: optical character recognition; PAR: partial; SemEval: semantic evaluation; SPU: spurious
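To illustrate the decoding step of the BiLSTM-CRF tagger mentioned above (a generic sketch, not the paper's implementation), the CRF layer uses Viterbi decoding to select the highest-scoring tag sequence given per-token emission scores (produced by the BiLSTM) and learned tag-transition scores. The dictionary-based data layout here is an assumption for clarity; real implementations use tensors.

```python
# Illustrative sketch of CRF Viterbi decoding over BiLSTM emission scores.
# Not the authors' code: data structures are simplified for readability.

def viterbi_decode(emissions, transitions):
    """emissions: list of {tag: score} dicts, one per token.
    transitions: {(prev_tag, tag): score}.
    Returns the highest-scoring tag sequence."""
    tags = list(emissions[0])
    # Best score of any path ending in each tag at position 0.
    score = {t: emissions[0][t] for t in tags}
    backptr = []
    for emit in emissions[1:]:
        new_score, ptr = {}, {}
        for t in tags:
            # Best previous tag to transition into the current tag t.
            prev = max(tags, key=lambda p: score[p] + transitions[(p, t)])
            new_score[t] = score[prev] + transitions[(prev, t)] + emit[t]
            ptr[t] = prev
        score, backptr = new_score, backptr + [ptr]
    # Backtrack from the best final tag to recover the full path.
    best = max(tags, key=score.get)
    path = [best]
    for ptr in reversed(backptr):
        best = ptr[best]
        path.append(best)
    return list(reversed(path))
```

The transition scores are what let the CRF enforce sequence-level consistency (e.g. penalizing invalid tag orders) on top of the BiLSTM's per-token predictions.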