Abstract

Web document repositories are now collaboratively created and edited. One such repository, Wikipedia, faces an important problem: assessing the quality of its articles. Existing approaches exploit techniques such as statistical models or machine learning algorithms to assess Wikipedia article quality, but they do not provide satisfactory results, nor do they adopt a comprehensive feature framework. In this article, we conduct an extensive survey of previous studies and summarize a comprehensive feature framework covering text statistics, writing style, readability, article structure, network, and editing history. Selected state-of-the-art deep-learning models, including the convolutional neural network (CNN), deep neural network (DNN), long short-term memory (LSTM) network, CNN-LSTM, bidirectional LSTM, and stacked LSTM, are applied to assess Wikipedia article quality. The deep-learning models are compared in detail with respect to both classification performance and training performance. We also analyze the importance of individual features and feature sets to determine which are most effective in distinguishing Wikipedia article quality. Extensive experiments validate the effectiveness of the proposed models.

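To make the modeling setup concrete, the sketch below shows one plausible way to combine the handcrafted feature framework with a CNN-LSTM classifier, as described in the abstract. It is an illustrative assumption, not the paper's exact architecture: the input shape, the six quality classes (FA, GA, B, C, Start, Stub), and all hyperparameters are placeholders.

```python
# Illustrative sketch only: a CNN-LSTM quality classifier, assuming each
# article is represented as a sequence of numeric feature vectors (e.g.
# per-section statistics drawn from the feature framework). Shapes, labels,
# and hyperparameters below are assumptions for demonstration purposes.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6                   # assumed quality grades: FA, GA, B, C, Start, Stub
SEQ_LEN, NUM_FEATURES = 32, 20    # assumed: 32 steps x 20 handcrafted features

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, NUM_FEATURES)),
    # 1-D convolution extracts local patterns across adjacent feature vectors
    layers.Conv1D(filters=64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM captures longer-range dependencies over the pooled sequence
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy usage with random data; real inputs would be features extracted
# from the Wikipedia article corpus and its edit history.
X = np.random.rand(100, SEQ_LEN, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=100)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```

The bidirectional and stacked LSTM variants mentioned in the abstract would follow the same pattern, swapping the single `LSTM` layer for `Bidirectional(LSTM(...))` or several stacked `LSTM` layers with `return_sequences=True`.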