Abstract

Named-Entity Recognition (NER) is used to extract information from text by identifying entities such as person names, organizations, locations, times, and other entities. Recently, machine learning approaches, particularly deep learning, have been widely used to recognize entity patterns in sentences. Embedding, the process of converting text data into a number or a vector of numbers, translates high-dimensional vectors into a relatively low-dimensional space; embeddings make it easier to apply machine learning to large inputs such as sparse vectors representing words. The embedding process can be performed with a supervised learning method, which requires a large labeled data set, or with an unsupervised learning approach. This study compares the two embedding methods: a trainable embedding layer (supervised learning) and pre-trained word embeddings (unsupervised learning). The trainable embedding layer uses the embedding layer provided by the Keras library, while the pre-trained word embeddings use word2vec, GloVe, and fastText; the NER models are built on a BiLSTM architecture. The results show that GloVe performed better than the other embedding techniques, with a micro-average F1 score of 76.48.
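To make the two settings concrete, the following is a minimal sketch (not the authors' code) of what an embedding layer does: it is a lookup table mapping each word index to a dense vector. The vocabulary, dimensions, and tokens below are hypothetical.

```python
import numpy as np

# Illustrative sketch: an embedding layer is a lookup table from word
# indices to dense vectors. In the trainable-embedding setting the table
# starts random and is updated during supervised training; with
# word2vec/GloVe/fastText it is initialized from vectors learned
# without NER labels. Vocabulary and dimensions here are hypothetical.
vocab = {"<PAD>": 0, "budi": 1, "works": 2, "in": 3, "jakarta": 4}
embedding_dim = 4

rng = np.random.default_rng(seed=0)
# Trainable case: random initialization, to be learned during training.
embedding_matrix = rng.normal(size=(len(vocab), embedding_dim))
embedding_matrix[vocab["<PAD>"]] = 0.0  # padding row stays zero

def embed(tokens):
    """Map a token sequence to a (seq_len, embedding_dim) matrix."""
    indices = [vocab.get(tok, vocab["<PAD>"]) for tok in tokens]
    return embedding_matrix[indices]

sentence = ["budi", "works", "in", "jakarta"]
vectors = embed(sentence)
print(vectors.shape)  # (4, 4): one dense vector per token
```

In Keras, the same idea is provided by the `Embedding` layer, whose weight matrix can either be left trainable or initialized from pre-trained vectors.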

Highlights

  • Named-Entity Recognition (NER) is used to extract information from text by identifying entities such as person names, organizations, locations, times, and other entities

  • Embedding, a process to convert text data into a number or a vector of numbers, translates high-dimensional vectors into a relatively low-dimensional space; embeddings make it easier to apply machine learning to large inputs such as sparse vectors representing words

  • The trainable embedding layer uses the embedding layer provided by the Keras library, while the pre-trained word embeddings use word2vec, GloVe, and fastText; the NER models are built on a BiLSTM architecture


Summary

RESULTS AND DISCUSSION

Combining the two data-set sources yielded a total of 4,892 sentences and 98,122 words. The data set is highly imbalanced across labels: the most frequent token label is O, accounting for 83.35% of the data set. Figures 4 and 5 show the learning curves for training the models with the most optimal sequence length, according to the F1 scores obtained in Table 2. Models using GloVe and fastText tend to be more resistant to overfitting than the trainable and word2vec embeddings. Table 2 shows the model evaluation results using the same initial hyper-parameter values for each different sequence length. The trainable and word2vec embeddings achieve their highest F1 scores at a sequence length of 40 tokens, while GloVe and fastText peak at 60 tokens. Figure 6 shows the confusion matrix from evaluating the model with the highest F1 score on the test data. Several prediction errors occur within the same entity type, such as I-ORGANIZATION predicted 13 times as B-ORGANIZATION, I-ORGANIZATION predicted 13 times as L-ORGANIZATION, and U-LOCATION predicted 12 times as U-ORGANIZATION.
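The reported metric is a micro-averaged F1 score over token-level tags. As a minimal sketch of how such a score can be computed (the paper does not publish its evaluation code, and whether the dominant O tag is excluded is an assumption here), with hypothetical BILOU-tagged tokens:

```python
def micro_f1(true_tags, pred_tags, ignore=("O",)):
    """Micro-averaged F1 over token-level tags.

    Pools true positives, false positives, and false negatives across
    all tag classes, then computes a single precision/recall/F1.
    Excluding the dominant 'O' tag is an assumption, not a detail
    confirmed by the paper.
    """
    tp = fp = fn = 0
    for t, p in zip(true_tags, pred_tags):
        if t in ignore and p in ignore:
            continue  # both non-entity: not counted
        if t == p:
            tp += 1
        else:
            if p not in ignore:
                fp += 1  # predicted an entity tag that is wrong
            if t not in ignore:
                fn += 1  # missed or mislabeled a gold entity tag
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example (not from the paper's data), showing the kinds
# of confusions reported above (I- vs B-, U-LOCATION vs U-ORGANIZATION):
gold = ["B-ORGANIZATION", "L-ORGANIZATION", "O", "U-LOCATION"]
pred = ["B-ORGANIZATION", "I-ORGANIZATION", "O", "U-ORGANIZATION"]
print(round(micro_f1(gold, pred), 2))  # → 0.33
```

Micro averaging weights every token equally, so on an imbalanced tag set like this one it is dominated by the frequent entity classes, unlike macro averaging, which weights each class equally.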

