Recent deep neural network architectures tailored to tabular data operate at the feature level, processing multiple latent representations simultaneously, typically one per feature. We investigate how the dimension and number of these latent representations affect model performance and generalization. Our results identify distinct model behaviors during both training and testing. To ease analysis of these behaviors, we propose a novel tool for characterizing data complexity and use it to highlight intricate relationships among data complexity, model complexity, and model performance. We hypothesize a phenomenon of implicit self-regularization that intensifies with model capacity and sample-to-dimension ratio. While this self-regularization can mitigate overfitting, it may also reduce performance on the training data. Our findings expand the understanding of neural networks applied to tabular data and provide insights that can help practitioners and automated methods design neural network architectures better matched to the complexity of specific tabular datasets.