Abstract
Digital data objects are increasingly non-textual in nature, and the effective retrieval of these objects by their intrinsic content depends largely on the underlying indexing mechanism. Since multimedia objects are now created with ever-increasing speed and ease, they often form the bulk of the content in large data repositories. In this study, we provide an effective automatic indexing mechanism based on reinforcement learning that systematically exploits the big data obtained from different user interactions. Such human interaction with the search system encodes human intelligence in assessing the relevance of a data object against user retrieval intentions and expectations. By methodically exploiting this big data and learning from such interactions, we establish an automatic indexing mechanism that allows multimedia data objects to be gradually indexed in the normal course of their usage. The proposed method is especially effective for the search of multimedia data objects such as music, photographs, and movies, where straightforward string matching is not applicable. The method also permits the index to adapt in response to user feedback while avoiding convergence to a local optimum. Through the use of the proposed method, the accuracy of searching and retrieving multimedia objects and documents may be significantly enhanced.
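To make the general idea concrete, the following is a minimal sketch, not the paper's actual algorithm, of how an index over multimedia objects could be reinforced by user feedback, with a small amount of exploration so the ranking does not settle into a local optimum. All names and parameters (FeedbackIndex, learning_rate, explore_rate) are hypothetical assumptions introduced for illustration.

```python
import random
from collections import defaultdict

# Illustrative sketch only: a weighted inverted index whose term-object
# weights are reinforced by user feedback (e.g. clicks on search results).
# Names and parameter values are assumptions, not the authors' design.

class FeedbackIndex:
    def __init__(self, learning_rate=0.1, explore_rate=0.05):
        # index[term][object_id] -> weight reflecting learned relevance
        self.index = defaultdict(lambda: defaultdict(float))
        self.learning_rate = learning_rate  # step size for reinforcement updates
        self.explore_rate = explore_rate    # probability of exploratory ranking

    def search(self, query_terms, k=10):
        """Rank objects by accumulated weights; occasionally explore by
        shuffling the ranking so the index can escape a local optimum."""
        scores = defaultdict(float)
        for term in query_terms:
            for obj, weight in self.index[term].items():
                scores[obj] += weight
        ranked = sorted(scores, key=scores.get, reverse=True)
        if random.random() < self.explore_rate:
            random.shuffle(ranked)  # exploratory presentation
        return ranked[:k]

    def feedback(self, query_terms, obj, relevant):
        """Strengthen or weaken the association between the query terms
        and the object, based on the observed user interaction."""
        reward = 1.0 if relevant else -1.0
        for term in query_terms:
            w = self.index[term][obj]
            self.index[term][obj] = w + self.learning_rate * (reward - w)

# Example usage: an object becomes retrievable by terms it was never
# manually annotated with, purely through accumulated interactions.
idx = FeedbackIndex()
idx.feedback(["sunset", "beach"], "photo_042.jpg", relevant=True)
print(idx.search(["sunset"]))
```

Under this kind of scheme the index is built gradually as a by-product of normal usage, and the explicit exploration term is one simple way to keep feedback-driven updates from locking the ranking into a local optimum.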