Abstract

The emergence of web-based Knowledge Management Systems (KMS) has raised several concerns about the quality of Knowledge Objects (KO), which are the building blocks of knowledge expertise. Web-based KMSs offer large knowledge repositories with millions of resources added by experts or uploaded by users, and their content must be assessed for accuracy and relevance. To improve the efficiency of ranking KOs, two models are proposed for KO evaluation. Both models are based on user interactions and exploit user reputation as an important factor in quality estimation. For the purpose of evaluating the performance of the two proposed models, the algorithms were implemented and incorporated in a KMS. The results of the experiment indicate that the two models are comparable in accuracy, and that the algorithms can be integrated in the search engine of a KMS to estimate the quality of KOs and accordingly rank the results of user searches.

Highlights

  • The ever-increasing volume and diversity of knowledge in Knowledge Management Systems (KMSs) have required users to spend more time searching for the information they need

  • Ranking of knowledge objects (KOs) in search results is typically based on measuring the similarity between the user's query and topics in the knowledge repository, without any consideration of quality [4]

  • The second phase re-ranks search results according to the estimated quality score for each KO
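The two-phase approach in the highlights can be illustrated with a minimal sketch: a first-phase similarity score for each KO is blended with its estimated quality score to produce the second-phase ranking. The blend weight `alpha` and the score ranges are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeObject:
    title: str
    similarity: float  # phase 1: query-topic similarity from the search engine (0..1)
    quality: float     # phase 2: quality score estimated from user interactions (0..1)

def rerank(results, alpha=0.7):
    """Second-phase re-ranking: sort KOs by a blend of similarity and quality.

    `alpha` weights similarity against quality; its value here is a
    hypothetical choice for illustration.
    """
    return sorted(
        results,
        key=lambda ko: alpha * ko.similarity + (1 - alpha) * ko.quality,
        reverse=True,
    )

results = [
    KnowledgeObject("A", similarity=0.9, quality=0.2),
    KnowledgeObject("B", similarity=0.8, quality=0.9),
]
print([ko.title for ko in rerank(results)])  # B overtakes A thanks to its quality score
```

With `alpha = 0.7`, KO "B" (0.7·0.8 + 0.3·0.9 = 0.83) outranks "A" (0.7·0.9 + 0.3·0.2 = 0.69), showing how quality estimation can reorder a similarity-only ranking.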


Summary

Introduction

The ever-increasing volume and diversity of knowledge in Knowledge Management Systems (KMSs) have required users to spend more time searching for the information they need. Searches of such knowledge repositories often yield a large number of results, making it difficult for users to choose items that will meet their requirements [1]-[3]. Some knowledge bases have resorted to expert evaluations; while these are reliable, they necessarily encompass only a limited number of KOs because of the limited number of experts and the tediousness of manual evaluation [6]. Reputation scores are computed according to the quality and quantity of contributions made by individual users [10].

(IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 9, No. 1, 2018 (www.ijacsa.thesai.org)
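The paragraph above describes reputation scores computed from the quality and quantity of a user's contributions [10]. A minimal sketch of one way such a score could combine the two factors is shown below; the saturating `half_credit` constant and the multiplicative form are hypothetical assumptions, not the paper's formula.

```python
def reputation(contribution_ratings, half_credit=10):
    """Toy reputation score from quality and quantity of contributions.

    `contribution_ratings`: ratings (0..1) of a user's individual contributions.
    Quality is the mean rating; quantity enters through a saturating factor
    n / (n + half_credit), so many contributions count more than few, but with
    diminishing returns. `half_credit` is an illustrative parameter.
    """
    n = len(contribution_ratings)
    if n == 0:
        return 0.0
    avg_quality = sum(contribution_ratings) / n
    quantity_factor = n / (n + half_credit)  # grows with n, saturates toward 1
    return avg_quality * quantity_factor

# A prolific high-quality contributor outscores an occasional one.
print(reputation([0.9] * 40))  # many good contributions
print(reputation([0.9] * 5))   # few good contributions
```

Under this sketch, forty contributions rated 0.9 yield 0.9 · 40/50 = 0.72, while five yield 0.9 · 5/15 = 0.30, so both dimensions cited in [10] influence the score.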

Methods
Results
Conclusion

