Abstract

Language models’ growing role in natural language processing necessitates a deeper understanding of their linguistic knowledge. Linguistic probing tasks have become crucial for model explainability, as they are designed to evaluate models’ understanding of various linguistic phenomena. Objective: This systematic review critically assesses the linguistic knowledge of language models via linguistic probing, providing a comprehensive overview of the understood linguistic phenomena and identifying future research areas. Method: We performed an extensive search of relevant academic databases and analyzed 57 articles published between October 2018 and October 2022. Results: While language models exhibit extensive linguistic knowledge, limitations persist in their comprehension of specific phenomena. The review also points to a need for consensus on evaluating language models’ linguistic knowledge and the linguistic terminology used. Conclusion: Our review offers an extensive look into the linguistic knowledge of language models through linguistic probing tasks. This study underscores the importance of understanding these models’ linguistic capabilities for effective use in NLP applications and for fostering more explainable AI systems.
