Abstract

Regulations now require data-driven systems, such as recommender systems, to empower users to delete their private data. Such deletion requests must also be reflected in trained machine learning models, spotlighting the understudied problem of machine unlearning. Despite the widespread use of machine learning models in modern recommender systems, unlearning in this context has received little attention. Existing unlearning methods fall short in preserving the collaborative information shared across users and items. To bridge this gap, we propose LASER, a model-agnostic erasable recommendation framework. LASER partitions the training data into disjoint, balanced shards using hypergraph-based embeddings and trains on these shards sequentially with curriculum learning, which preserves collaborative information and refines model utility. To address the inefficiency of sequential training, we integrate early stopping and parameter manipulation. Theoretical analyses and experiments on real-world datasets validate LASER’s effectiveness: it enables efficient unlearning while outperforming state-of-the-art models in preserving model utility.
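To make the workflow concrete, the sketch below illustrates the general shard-then-retrain idea the abstract describes: interactions are split into balanced shards, one model is trained over the shards in sequence with per-shard checkpoints, and deleting an interaction only requires rolling back to the checkpoint before the affected shard and retraining from there. This is a minimal illustration under simplified assumptions; the toy model, the round-robin partitioning, and all function names are placeholders, not LASER’s hypergraph-based partitioning or actual implementation.

```python
# Hypothetical sketch of shard-based sequential training and unlearning.
# None of the names below come from LASER itself.

import copy
from typing import Dict, List, Tuple

Interaction = Tuple[int, int]  # (user_id, item_id)


class ToyRecModel:
    """Stand-in for a recommender model; it merely counts interactions."""

    def __init__(self) -> None:
        self.counts: Dict[Interaction, int] = {}

    def train_on_shard(self, shard: List[Interaction]) -> None:
        for inter in shard:
            self.counts[inter] = self.counts.get(inter, 0) + 1

    def snapshot(self) -> Dict[Interaction, int]:
        return copy.deepcopy(self.counts)

    def restore(self, state: Dict[Interaction, int]) -> None:
        self.counts = copy.deepcopy(state)


def balanced_partition(data: List[Interaction], num_shards: int) -> List[List[Interaction]]:
    """Placeholder for embedding-based partitioning: round-robin into balanced shards."""
    shards: List[List[Interaction]] = [[] for _ in range(num_shards)]
    for i, inter in enumerate(data):
        shards[i % num_shards].append(inter)
    return shards


def train_sequentially(model: ToyRecModel, shards: List[List[Interaction]]) -> List[Dict]:
    """Train one model shard by shard (the curriculum order),
    checkpointing after each shard so later shards can be retrained alone."""
    checkpoints: List[Dict] = []
    for shard in shards:
        model.train_on_shard(shard)
        checkpoints.append(model.snapshot())
    return checkpoints


def unlearn(model: ToyRecModel, shards: List[List[Interaction]],
            checkpoints: List[Dict], deleted: Interaction) -> None:
    """Delete one interaction: drop it from its shard, roll the model back to
    the checkpoint preceding that shard, then retrain only the later shards."""
    k = next(i for i, s in enumerate(shards) if deleted in s)
    shards[k] = [x for x in shards[k] if x != deleted]
    model.restore(checkpoints[k - 1] if k > 0 else {})
    del checkpoints[k:]
    for shard in shards[k:]:
        model.train_on_shard(shard)
        checkpoints.append(model.snapshot())


if __name__ == "__main__":
    data = [(u, i) for u in range(4) for i in range(3)]
    model = ToyRecModel()
    shards = balanced_partition(data, num_shards=3)
    ckpts = train_sequentially(model, shards)
    unlearn(model, shards, ckpts, deleted=(0, 1))
    assert (0, 1) not in model.counts  # the deleted interaction leaves no trace
```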
