Abstract

Knowledge integration in distributed data mining, which aims to reconcile inconsistent information located at distributed sites, has received widespread attention. Traditional integration methods are ineffective because they cannot generate global knowledge, support advanced integration strategies, or make predictions without the individual classifiers. In this paper, we propose an argumentation-based reinforcement learning method to address this problem. Inspired by meta-learning, we integrate distributed knowledge and extract meta-knowledge, i.e., agreed-upon knowledge that is consistent across all agents. Specifically, two learning stages are introduced: an argumentation-based learning stage that integrates knowledge and extracts meta-knowledge, and a reinforcement learning stage that evaluates and refines the meta-knowledge. The two stages run alternately to build a global meta-knowledge base, which can be used to make predictions directly. Results from extensive experiments demonstrate that our method extracts refined meta-knowledge and achieves highly satisfactory performance.
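The abstract describes two stages that run alternately to produce a global meta-knowledge base. The following is a minimal sketch of that control flow only, assuming hypothetical names (`argumentation_stage`, `reinforcement_stage`, `extract_meta_knowledge`) and deliberately crude placeholder logic; it is not the paper's actual algorithm.

```python
"""Sketch of the alternating two-stage loop from the abstract.
All names and the placeholder logic are illustrative assumptions,
not the paper's method or API."""

from typing import Dict, List, Set

Rule = str  # hypothetical stand-in for one piece of local knowledge


def argumentation_stage(local_knowledge: List[Set[Rule]]) -> Set[Rule]:
    """Integrate distributed knowledge: keep only rules that every local
    knowledge base agrees on (a crude stand-in for argumentation)."""
    return set.intersection(*local_knowledge) if local_knowledge else set()


def reinforcement_stage(meta: Set[Rule], feedback: Dict[Rule, float]) -> Set[Rule]:
    """Evaluate each meta-rule against reward feedback and drop rules
    below a threshold (placeholder for the refinement step)."""
    return {r for r in meta if feedback.get(r, 0.0) >= 0.5}


def extract_meta_knowledge(local_knowledge: List[Set[Rule]],
                           feedback: Dict[Rule, float],
                           rounds: int = 3) -> Set[Rule]:
    """Run the two stages alternately to build the global meta-knowledge
    base that would be used directly for prediction."""
    meta: Set[Rule] = set()
    for _ in range(rounds):
        meta = argumentation_stage(local_knowledge)
        meta = reinforcement_stage(meta, feedback)
        # Feeding the refined meta-knowledge back to the agents between
        # rounds is omitted in this sketch.
    return meta


if __name__ == "__main__":
    agents = [{"r1", "r2", "r3"}, {"r1", "r3"}, {"r1", "r3", "r4"}]
    rewards = {"r1": 0.9, "r3": 0.2}
    print(extract_meta_knowledge(agents, rewards))  # -> {'r1'}
```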
