Abstract

Many real-world datasets are represented as tensors, i.e., multi-dimensional arrays of numerical values. Storing them without compression often requires substantial space, which grows exponentially with the order. While many tensor compression algorithms are available, many of them rely on strong assumptions about the data's order, sparsity, rank, and smoothness. In this work, we propose TensorCodec, a lossy compression algorithm for general tensors that do not necessarily adhere to strong input data assumptions. TensorCodec incorporates three key ideas. The first is neural tensor-train decomposition (NTTD), in which we integrate a recurrent neural network into Tensor-Train Decomposition to enhance its expressive power and alleviate the limitations imposed by the low-rank assumption. The second is to fold the input tensor into a higher-order tensor to reduce the space required by NTTD. The third is to reorder the mode indices of the input tensor to reveal patterns that NTTD can exploit for improved approximation. In addition, we extend TensorCodec to enable the lossy compression of tensors with missing entries, which are common in real-world datasets. Our analysis and experiments on 8 real-world datasets demonstrate that TensorCodec is (a) Concise: it gives up to $7.38\times$ more compact compression than the best competitor with similar reconstruction error, (b) Accurate: given the same budget for compressed size, it yields up to $3.33\times$ more accurate reconstruction than the best competitor, (c) Scalable: its empirical compression time is linear in the number of tensor entries, and it reconstructs each entry in logarithmic time. Our code and datasets are available at https://github.com/kbrother/TensorCodec.
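To make the underlying scheme concrete, the following is a minimal sketch of plain Tensor-Train (TT) decomposition, the base method that NTTD extends with a recurrent neural network. All function names, shapes, and ranks here are illustrative choices, not taken from the paper's implementation; the sketch only shows how a single tensor entry is recovered as a product of one small matrix slice per mode.

```python
import numpy as np

def tt_entry(cores, index):
    """Reconstruct one tensor entry from TT-cores.

    cores[k] has shape (r_k, n_k, r_{k+1}) with r_0 = r_N = 1,
    so each entry is a product of one matrix slice per mode.
    """
    vec = np.ones((1, 1))
    for core, i in zip(cores, index):
        vec = vec @ core[:, i, :]   # (1, r_k) @ (r_k, r_{k+1})
    return float(vec[0, 0])

# Toy order-3 tensor of shape (4, 5, 6) with TT-ranks (1, 2, 3, 1).
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]

# Looking up one entry costs one small matrix product per mode, so the
# cost grows with the tensor order, not with the number of entries.
print(tt_entry(cores, (1, 2, 3)))
```

Storing only the cores instead of all 4 × 5 × 6 entries is what makes TT a compression scheme; NTTD replaces the fixed cores with the outputs of a recurrent neural network to lift the low-rank restriction.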