Embedding representations are a popular approach for modeling users and items in recommender systems, e.g., in matrix factorization, two-tower models, or autoencoders, where users and items are embedded in a low-dimensional, dense embedding space. Other methods instead model high-dimensional relationships between items, most notably item-based collaborative filtering (CF), which relies on an item-to-item similarity matrix. Item-based CF was proposed over two decades ago and has gained renewed interest through new learning methods in the form of SLIM [18] and, most recently, EASE [25]. In this work, we rephrase traditional item-based CF in terms of sparse user encoders, where the user encoder is an (arbitrary) function and the item representation is learned. Item-based CF is the special case in which the sparse user encoding is the one-hot encoding of a user's history. In contrast to typical dense user/item encoder models, this work targets high-dimensional, sparse user encoders. The core contribution is an efficient closed-form learning algorithm that can fit the item representation for arbitrary sparse user encoders. Several applications of this algorithm are presented, including higher-order encoders, hashed encoders, and feature-based encoders.
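To make the framing concrete, the following is a minimal NumPy sketch of item-based CF viewed as a sparse user encoder: the encoder is the identity on a user's binary history vector, and scoring is a single product with a learned item-to-item matrix. The matrix here is computed with the closed-form EASE solution [25] (B = I - P · diag(1/diag(P)) with P = (XᵀX + λI)⁻¹ and zero diagonal); the toy interaction matrix `X` and the regularization value `lam` are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical toy interaction matrix X (users x items), binary histories.
X = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

lam = 0.5  # L2 regularization strength (assumed value)

# Closed-form EASE solution: P = (X^T X + lam*I)^{-1},
# B_ij = -P_ij / P_jj for i != j, and B_jj = 0 (no self-similarity).
G = X.T @ X + lam * np.eye(X.shape[1])
P = np.linalg.inv(G)
B = -P / np.diag(P)      # divides column j by P[j, j]
np.fill_diagonal(B, 0.0)

# Item-based CF as a sparse user encoder: the "encoder" is simply the
# one-hot (multi-hot) history itself, so scoring is one matrix product.
scores = X @ B           # predicted affinity for every user/item pair
```

The generalization studied in the paper replaces the identity encoding `X` in the scoring step with an arbitrary sparse encoding of the user, while the item representation (here `B`) remains the learned quantity.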