Abstract
Online learning is a fundamental paradigm for learning from continuous data streams. Traditional online learning approaches usually assume that the feature space of the data stream is fixed and that each incoming instance receives its true label after a prediction is made. However, in many real-world applications, such as personalized recommender systems, the feature space may keep expanding as user behaviors accumulate. Moreover, we may receive only bandit feedback, i.e., we learn only whether the prediction was correct or not. To address this important but rarely studied problem, we propose a novel algorithm, LIFBF, together with its two variants LIFBF-I and LIFBF-II, to learn from data streams with an incremental feature space and bandit feedback. Specifically, when an instance arrives with augmented features, we first use an exploration-exploitation strategy to guess its best label; we then propose a new loss function that accounts for both the bandit feedback and the guessed label. Finally, we design a highly dynamic multi-class classifier that updates the shared and augmented features by adopting the passive-aggressive rule and the structural risk minimization principle, respectively. We theoretically analyze the cumulative loss bound of LIFBF, and empirical studies on various datasets further validate the effectiveness of the proposed algorithms.
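To make the setting concrete, the following is a minimal illustrative sketch, not the paper's exact LIFBF update, of the three ingredients the abstract describes: a weight matrix that is zero-padded when augmented features appear, an epsilon-greedy exploration-exploitation step to pick a label, and a passive-aggressive-style update driven only by bandit (correct/incorrect) feedback and a guessed label. The class name, the epsilon-greedy choice, and the specific margin terms are assumptions made for illustration.

```python
import numpy as np

class BanditIncrementalClassifier:
    """Illustrative sketch (not the paper's exact LIFBF algorithm) of an
    online multi-class learner under an incremental feature space and
    bandit feedback."""

    def __init__(self, n_classes, epsilon=0.1, seed=0):
        self.n_classes = n_classes
        self.epsilon = epsilon          # exploration rate (assumed strategy)
        self.W = np.zeros((n_classes, 0))  # grows with the feature space
        self.rng = np.random.default_rng(seed)

    def _grow(self, d):
        # Zero-pad the weights for newly observed (augmented) features,
        # keeping the shared-feature weights intact.
        if d > self.W.shape[1]:
            pad = np.zeros((self.n_classes, d - self.W.shape[1]))
            self.W = np.hstack([self.W, pad])

    def predict(self, x):
        self._grow(len(x))
        scores = self.W @ x
        if self.rng.random() < self.epsilon:          # explore
            return int(self.rng.integers(self.n_classes))
        return int(np.argmax(scores))                 # exploit

    def update(self, x, y_hat, correct):
        # Bandit feedback: we only learn whether y_hat was right.
        self._grow(len(x))
        scores = self.W @ x
        if correct:
            # Reinforce y_hat against the runner-up class (margin loss).
            rival = int(np.argsort(scores)[-2]) if self.n_classes > 1 else y_hat
            loss = max(0.0, 1.0 - (scores[y_hat] - scores[rival]))
            target, other = y_hat, rival
        else:
            # Demote y_hat in favor of the best remaining candidate,
            # used here as the "guessed" label.
            masked = scores.copy()
            masked[y_hat] = -np.inf
            guess = int(np.argmax(masked))
            loss = max(0.0, 1.0 - (scores[guess] - scores[y_hat]))
            target, other = guess, y_hat
        norm = 2.0 * float(x @ x)
        if loss > 0.0 and norm > 0.0:
            tau = loss / norm             # passive-aggressive step size
            self.W[target] += tau * x
            self.W[other] -= tau * x
```

Here the passive-aggressive step applies to all current features for simplicity; the paper treats shared and augmented features differently (passive-aggressive rule vs. structural risk minimization), a distinction this sketch does not reproduce.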
IEEE Transactions on Knowledge and Data Engineering