The ptychographic iterative engine (PIE) is a widely used algorithm that enables phase retrieval at nanometer-scale resolution across a broad range of imaging experiment configurations. By analyzing diffraction intensities measured at multiple scan positions where a probing wavefield interacts with a sample, the algorithm solves a difficult optimization problem with constraints derived from the experimental geometry as well as from sample properties. How effectively this optimization problem is solved depends strongly on the order in which the measured diffraction intensities are used by the algorithm; random ordering is widely used because it helps the algorithm escape stagnation in poor-quality local solutions. In this study, we introduce an extension to the PIE algorithm that adopts ideas popularized by recent machine learning training methods, in particular minibatch stochastic gradient descent. Our results demonstrate that these techniques significantly improve the convergence properties of the PIE numerical optimization problem.
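To make the minibatch idea concrete, the sketch below applies minibatch stochastic gradient descent with random per-epoch ordering to a toy least-squares problem. This is only a generic illustration of the optimization technique named above, not the PIE update itself; all variable names and the problem setup are illustrative assumptions.

```python
import numpy as np

# Toy problem (illustrative, not PIE): find x minimizing ||A x - b||^2,
# where each row of A plays the role of one "measurement".
rng = np.random.default_rng(0)
n_meas, n_param = 200, 5
A = rng.normal(size=(n_meas, n_param))
x_true = rng.normal(size=n_param)
b = A @ x_true                      # consistent, noiseless measurements

x = np.zeros(n_param)               # initial guess
batch_size, lr = 20, 0.01           # minibatch size and step size (assumed)

for epoch in range(200):
    order = rng.permutation(n_meas)         # random ordering each epoch
    for start in range(0, n_meas, batch_size):
        idx = order[start:start + batch_size]
        Ab, bb = A[idx], b[idx]
        # Gradient of the least-squares loss over this minibatch only
        grad = 2.0 * Ab.T @ (Ab @ x - bb) / len(idx)
        x -= lr * grad

print(float(np.max(np.abs(x - x_true))))    # final parameter error
```

Updating from a small random subset of measurements per step, rather than one measurement or the full set, is the same trade-off minibatch SGD makes in machine learning: noisier steps that help escape poor local solutions, with lower per-step cost than a full gradient.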