The optimization of storage using neural networks is now commonly achieved by solving a single optimization problem. We first show that this approach makes it possible to solve high-dimensional storage problems, but that it is limited by memory issues. We then propose a modification of this algorithm based on the dynamic programming principle, together with neural network architectures that outperform classical feedforward networks at approximating the Bellman values of the problem. Finally, we study the stochastic linear case and show that the Bellman values of storage problems can be accurately approximated using conditional cuts computed with a neural network recently proposed by the author. This new approximation method combines the resolution of linear problems by a linear programming solver with a neural network approximation of the Bellman values.
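
To fix ideas on the dynamic-programming variant mentioned above, the following sketch shows a regression-based backward recursion in which a small feedforward network approximates the conditional Bellman (continuation) values of a toy storage problem. It is only a minimal illustration under assumed dynamics and parameters (mean-reverting price, box constraints on injection/withdrawal, a discretized control grid, network size); it does not reproduce the architectures, the conditional-cut construction, or the linear programming coupling studied in the paper.

```python
# Minimal sketch (not the paper's implementation): backward dynamic programming
# where a small feedforward network is regressed on simulated values to
# approximate the conditional Bellman (continuation) values of a toy storage.
# Price dynamics, storage bounds, action grid and network size are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

T, n_paths = 10, 4096                 # number of dates and simulated paths (assumed)
c_max, a_max = 1.0, 0.25              # storage capacity and injection/withdrawal bound
kappa, p_bar, sigma = 0.5, 1.0, 0.2   # mean-reverting price parameters (assumed)


def simulate_prices():
    """Simulate a discrete mean-reverting spot price on dates 0..T."""
    p = torch.full((n_paths,), p_bar)
    path = [p]
    for _ in range(T):
        p = p + kappa * (p_bar - p) + sigma * torch.randn(n_paths)
        path.append(p)
    return torch.stack(path)          # shape (T + 1, n_paths)


class BellmanNet(nn.Module):
    """Feedforward approximation of the continuation value (price, inventory) -> value."""

    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, price, inventory):
        return self.net(torch.stack([price, inventory], dim=-1)).squeeze(-1)


def fit(net, price, inventory, target, epochs=200, lr=1e-2):
    """Least-squares regression of the network on realized continuation samples."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(price, inventory) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return net


prices = simulate_prices()
actions = torch.linspace(-a_max, a_max, 11)   # discretized injection/withdrawal grid
cont_net = None                               # continuation value after date T is zero

# Backward recursion: at each date t, build realized samples of V_{t+1} by a
# one-step optimization against the previously fitted continuation network,
# then regress them on date-t information to obtain the new continuation.
for t in reversed(range(T)):
    inventory = torch.rand(n_paths) * c_max   # sample inventories to cover the state space
    p_next = prices[t + 1]
    realized = torch.full((n_paths,), -1e9)
    with torch.no_grad():
        for a in actions:
            inv_new = (inventory + a).clamp(0.0, c_max)
            payoff = -p_next * (inv_new - inventory)   # pay to inject, earn to withdraw
            cont = torch.zeros(n_paths) if cont_net is None else cont_net(p_next, inv_new)
            realized = torch.maximum(realized, payoff + cont)
    cont_net = fit(BellmanNet(), prices[t], inventory, realized)

# Value at date 0 for an initially empty storage.
with torch.no_grad():
    c0 = torch.zeros(n_paths)
    v0 = torch.full((n_paths,), -1e9)
    for a in actions:
        inv_new = (c0 + a).clamp(0.0, c_max)
        v0 = torch.maximum(v0, -prices[0] * inv_new + cont_net(prices[0], inv_new))
print("approximate storage value at t=0:", v0.mean().item())
```

In this sketch the regression on date-t information plays the role of the conditional expectation in the Bellman recursion; the paper's contribution concerns what replaces the plain feedforward regressor above, and, in the stochastic linear case, how the inner one-step optimization is delegated to a linear programming solver fed with conditional cuts.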