Abstract
Regulations now mandate that data-driven systems, e.g., recommender systems, empower users to delete their private data. This creates the need to remove such data from trained machine learning models, spotlighting the understudied problem of machine unlearning. Despite the widespread use of machine learning models in modern recommender systems, unlearning in this setting has received little attention, and existing unlearning methods fall short in preserving the collaborative information shared across users and items. To bridge this gap, we propose LASER, a model-agnostic erasable recommendation framework. LASER partitions the training data into disjoint, balanced shards using hypergraph-based embeddings. By training sequentially on these shards and augmenting the process with curriculum learning, LASER preserves collaborative information and improves model utility. To address the inefficiency of sequential training, we further integrate early stopping and parameter manipulation. Theoretical analyses and experiments on real-world datasets validate LASER's effectiveness: it enables efficient unlearning while outperforming state-of-the-art models in preserving model utility.
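To make the shard-then-sequential-train workflow described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: the hypergraph-embedding-based partition is replaced by a simple user-hash stand-in, curriculum learning, early stopping, and parameter manipulation are omitted, and names such as `Interaction`, `train_shard`, and `unlearn` are hypothetical. The key idea it shows is that unlearning a deleted interaction only requires retraining from the shard that contained it, resuming from the checkpoint saved just before that shard.

```python
# Illustrative sketch of shard-based sequential training and unlearning.
# NOT the paper's implementation; see hedging note above.
from dataclasses import dataclass
from typing import Dict, List
import random


@dataclass(frozen=True)
class Interaction:
    user: int
    item: int
    rating: float


def partition(data: List[Interaction], num_shards: int) -> List[List[Interaction]]:
    """Stand-in for LASER's hypergraph-embedding-based balanced partition:
    here users are simply hashed into shards of roughly equal size."""
    shards: List[List[Interaction]] = [[] for _ in range(num_shards)]
    for x in data:
        shards[x.user % num_shards].append(x)
    return shards


def train_shard(params: Dict[int, float], shard: List[Interaction], lr: float = 0.01) -> Dict[int, float]:
    """Toy 'model': a per-item bias updated by SGD. A real recommender would be
    warm-started from `params`, which is what makes the training sequential."""
    params = dict(params)  # copy so earlier checkpoints stay intact
    for x in shard:
        pred = params.get(x.item, 0.0)
        params[x.item] = pred + lr * (x.rating - pred)
    return params


def sequential_train(shards: List[List[Interaction]]) -> List[Dict[int, float]]:
    """Train on shards in sequence, each shard starting from the previous
    checkpoint; keep per-shard checkpoints so unlearning can resume mid-way."""
    checkpoints, params = [], {}
    for shard in shards:
        params = train_shard(params, shard)
        checkpoints.append(params)
    return checkpoints


def unlearn(shards, checkpoints, removed: Interaction):
    """Delete `removed`, then retrain only from its shard onward, reusing the
    checkpoint saved just before that shard instead of retraining from scratch."""
    k = next(i for i, s in enumerate(shards) if removed in s)
    shards[k] = [x for x in shards[k] if x != removed]
    params = checkpoints[k - 1] if k > 0 else {}
    for i in range(k, len(shards)):
        params = train_shard(params, shards[i])
        checkpoints[i] = params
    return checkpoints


if __name__ == "__main__":
    random.seed(0)
    data = [Interaction(u, random.randrange(20), random.random())
            for u in range(100) for _ in range(5)]
    shards = partition(data, num_shards=4)
    checkpoints = sequential_train(shards)
    checkpoints = unlearn(shards, checkpoints, data[0])  # erase one interaction
    print("final model size:", len(checkpoints[-1]))
```

Under this sketch, the cost of a deletion is proportional to the number of shards trained after the affected one, which is why balanced shards and the paper's early-stopping and parameter-manipulation refinements matter for efficiency.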