Emerging technologies based on resistive switching (ReRAM) devices promise to improve the speed and energy efficiency of next-generation machine learning accelerators, but further research is required to reach commercial maturity. System-level prototyping with emerging devices is costly, and algorithmic investigations rely on hardware neural network models that often deviate from experimental reality. In this work, the concept of modeling bias is proposed as a way to quantify this deviation and to support reliable evaluation of device populations in the context of neural network algorithms. While applicable to other device modeling techniques, modeling bias is investigated here using jump tables, a promising physics-less technique for modeling emerging memory devices in hardware networks, and questions about the fidelity of these tables with respect to stochastic device behavior are answered. Two jump table construction methods, conventional binning and a novel Optuna-optimized binning, are explored using synthetic data with known distributions for benchmarking and experimental data obtained from TiOx ReRAM devices for practical testing. Novel device metrics are proposed, and it is shown that these metrics provide crucial insights into the device population before the hardware network is trained. Results on a multi-layer perceptron trained on MNIST show that binning-based device models deviate from the target network accuracy when the device dataset contains few data points or high switching noise. The proposed approach opens the possibility of device-algorithm co-design investigations into statistical device models with better performance, as well as experimentally verified modeling bias in different in-memory computing and neural network architectures.