Abstract

In this work, we present a novel approach to lexical complexity prediction (LCP) that combines diverse linguistic features with encodings from deep neural networks. We explore the integration of 23 handcrafted linguistic features with embeddings from two well-known language models: BERT and XLM-RoBERTa. Our method concatenates these features before feeding them into various machine learning algorithms, including SVM, Random Forest, and fine-tuned transformer models. We evaluate our approach on two datasets: CompLex for English (a high-resource language) and CLexIS2 for Spanish (a relatively low-resource language), allowing us to study performance from a cross-lingual perspective. Our experiments cover different combinations of linguistic features with encodings from pretrained deep learning models, testing both token-based and sequence-level encodings. The results demonstrate the effectiveness of our hybrid approach. On the English CompLex corpus, our best model achieved a mean absolute error (MAE) of 0.0683, a 29.2% improvement over using linguistic features alone (MAE 0.0965). On the Spanish CLexIS2 corpus, we achieved an MAE of 0.1323, a 19.4% improvement. These findings show that handcrafted linguistic features play a fundamental role in achieving higher performance, particularly when combined with deep learning approaches. Our work suggests that hybrid approaches should be preferred over fully end-to-end solutions for LCP tasks, especially in multilingual contexts.
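To make the hybrid method concrete, the sketch below shows one plausible way such a pipeline could be assembled: a pretrained transformer produces a token-level encoding, which is concatenated with a handcrafted feature vector and passed to a standard regressor. The model name, pooling strategy, and all function names are illustrative assumptions (using Hugging Face transformers and scikit-learn), not the authors' actual implementation.

```python
# Hypothetical sketch of the hybrid feature pipeline described above:
# concatenate handcrafted linguistic features with a pretrained
# transformer encoding of the target token, then fit a regressor.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.svm import SVR

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def token_embedding(sentence: str, target: str) -> np.ndarray:
    """Mean-pool the hidden states of the target word's subword pieces."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = inputs["input_ids"][0].tolist()
    # locate the target's subword span in the sentence (first match)
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0).numpy()
    return hidden.mean(dim=0).numpy()  # fallback: whole-sequence average

def build_features(sentences, targets, X_ling):
    # X_ling: (n_samples, 23) matrix of handcrafted linguistic features,
    # e.g. word length, corpus frequency, syllable count (assumed set)
    X_deep = np.stack([token_embedding(s, t) for s, t in zip(sentences, targets)])
    return np.hstack([X_ling, X_deep])  # simple concatenation of both views

# Example usage with gold complexity scores y_train in [0, 1]:
# X_train = build_features(train_sentences, train_targets, X_ling_train)
# regressor = SVR().fit(X_train, y_train)
```

The same concatenated representation can be fed to a Random Forest or used as input features for a fine-tuned transformer head, which is what makes the comparison across learners straightforward.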