Deep neural network (DNN) models, particularly convolutional neural networks (CNNs), have demonstrated remarkable performance in biomedical image classification due to their ability to automatically learn features from large datasets. A common challenge in preparing large microscopy datasets for DNN tasks is sample defocus, which can impair model performance. To handle defocus, computational imaging, and specifically quantitative phase imaging (QPI), performs digital refocusing using both the phase and the amplitude of the complex optical field. This leads us to investigate whether feeding the complex field directly into a DNN could address the defocus problem, since the in-focus information is implicitly encoded in the complex field. In this paper, we assess the feasibility of employing neural networks to directly process full amplitude and phase data from a defocused plane without digital refocusing. Our specific focus is the defocus tolerance of image classification neural networks when amplitude and phase are taken as inputs. To this end, we used Fourier ptychographic microscopy (FPM) to acquire in-focus phase and amplitude images of two distinct cell types, normal red blood cells and echinocytes. We then digitally propagated the complex field to generate progressively defocused images of the samples, which served as training and testing datasets for the image classification neural networks. Although the digitally defocused images contain sufficient information to recover the original in-focus images, we observed that current standard implementations of deep learning models are unable to effectively exploit the defocused field to distinguish between the two cell types. We conclude that the physical-model-based digital refocusing capability of QPI remains indispensable for overcoming defocus in current standard DNN models.
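
The digital defocusing step described above is typically implemented with the angular spectrum method. The following is a minimal sketch under that assumption, using NumPy; the function name, wavelength, and pixel size are illustrative placeholders, not the values used in our experiments.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex optical field by distance dz (angular spectrum method).

    field: 2-D complex array (amplitude and phase at the source plane)
    dz: propagation distance in meters (sign sets defocus direction)
    wavelength: illumination wavelength in meters
    dx: pixel pitch of the sampled field in meters
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)            # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=dx)            # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)     # transfer function; evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative usage: generate a stack of progressively defocused fields
# from a synthetic in-focus field (placeholder values, not experimental ones).
wavelength = 532e-9                          # assumed illumination wavelength (m)
dx = 0.5e-6                                  # assumed effective pixel size (m)
in_focus = np.exp(1j * np.random.rand(256, 256))      # stand-in complex field
defocus_stack = [angular_spectrum_propagate(in_focus, dz, wavelength, dx)
                 for dz in np.linspace(0.0, 20e-6, 5)]  # 0 to 20 um defocus
```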
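
Likewise, feeding amplitude and phase to a classifier can be sketched by stacking them as a two-channel input to a standard CNN. The ResNet-18 backbone and all parameter values below are assumptions for illustration, not the architecture or settings used in this work.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stand-in defocused complex field (in practice, the output of the
# propagation step above at a given defocus distance).
field = np.exp(1j * np.random.rand(224, 224))

# Represent the complex field as a 2-channel tensor: [amplitude, phase].
amp = torch.from_numpy(np.abs(field)).float()
phase = torch.from_numpy(np.angle(field)).float()
x = torch.stack([amp, phase], dim=0).unsqueeze(0)    # shape (1, 2, H, W)

# Standard classifier with its first convolution adapted to 2 input channels;
# ResNet-18 is an illustrative choice, not necessarily the paper's model.
model = resnet18(num_classes=2)                      # 2 classes: RBC vs. echinocyte
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
logits = model(x)                                    # raw scores for the two classes
```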