Winter wheat and winter rye often suffer from nutrient deficiencies caused by variations in fertilizer usage, soil conditions, and other environmental factors. Timely, precise detection of these deficiencies, traditionally carried out through manual field inspection and soil testing, lacks scalability and often delays necessary interventions. To overcome these limitations, I conducted original research exploring the use of a Vision Transformer neural network, specifically the Swin Transformer V2, to classify crops into one of seven nutrient categories from UAV-based imagery. Two distinct annotated datasets, winter wheat (WW2020) and winter rye (WR2021), were used, and a separate model was trained on each dataset to account for visual differences in how nutrient deficiencies present between the two crops. This approach leverages the Vision Transformer's ability to discern intricate patterns across expansive spatial regions. Model predictions were evaluated with top-1 accuracy. With a train-validation split, the models reached 67.7% accuracy on WW2020 and 68.2% on WR2021; without a split, accuracy rose to 78.0% for WW2020 and 82.3% for WR2021. The Swin Transformer V2 thus shows promise for detecting nutrient deficiencies in winter crops, though further fine-tuning and data collection are required to improve performance.
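The evaluation metric used above, top-1 accuracy, simply counts how often the model's highest-scoring class matches the ground-truth label. A minimal sketch in plain Python (the class scores and labels below are illustrative toy values, not data from the study):

```python
def top1_accuracy(scores, labels):
    """Fraction of samples whose argmax prediction equals the true label.

    scores: list of per-sample score vectors, one value per class.
    labels: list of integer class indices (0-based).
    """
    correct = sum(
        max(range(len(row)), key=row.__getitem__) == y
        for row, y in zip(scores, labels)
    )
    return correct / len(labels)

# Toy score vectors over seven hypothetical nutrient classes.
scores = [
    [0.10, 0.60, 0.05, 0.05, 0.10, 0.05, 0.05],  # argmax -> class 1
    [0.70, 0.10, 0.05, 0.05, 0.05, 0.03, 0.02],  # argmax -> class 0
    [0.10, 0.10, 0.50, 0.10, 0.10, 0.05, 0.05],  # argmax -> class 2
]
labels = [1, 0, 3]  # third sample is misclassified

print(top1_accuracy(scores, labels))  # 2 of 3 correct -> 0.666...
```

A per-dataset accuracy such as 67.7% on WW2020 is exactly this quantity computed over that dataset's evaluation images.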