This study highlights the evolving landscape of object detection, emphasizing the advantages of deep learning-based approaches over traditional methods. Deep learning has gained substantial traction in Intelligent Transportation Systems (ITS) applications that demand robust image processing, such as vehicle identification, localization, tracking, and counting in traffic scenes. The YOLO family of algorithms has become a popular choice for such tasks, with YOLOv5 receiving particular attention. The more recent iteration, YOLOv8, was introduced in early 2023; owing to its recency, studies on YOLOv8 remain scarce, and an ITS application has not yet appeared in the literature. To address this gap, this study investigates vehicle detection using the YOLOv8 algorithm, targeting aerial images acquired by a modified autonomous UAV and thereby applying this new algorithm in a practical context. The dataset used for training and testing was curated from a diverse collection of traffic images captured during UAV missions. To increase the variability of vehicle images, flight patterns, altitudes, orientations, and camera angles were systematically varied using a custom-designed and programmed drone, with the aim of improving the algorithm's adaptability and generalization across a wide range of scenarios. To evaluate performance, a comparative analysis was conducted on the YOLOv8n and YOLOv8x submodels of the YOLOv8 series.
Both submodels were rigorously tested on the dataset under diverse lighting and environmental conditions. YOLOv8n achieved an average precision of 0.83 and a recall of 0.79, whereas YOLOv8x attained an average precision of 0.96 and a recall of 0.89. YOLOv8x also outperformed YOLOv8n in F1 score and mAP, achieving 0.87 and 0.83 respectively, compared with YOLOv8n's 0.81 and 0.79. These results reveal the relative strengths and weaknesses of the two submodels, leading to the conclusion that YOLOv8n is well suited to real-time ITS applications, while YOLOv8x exhibits superior detection capabilities.
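The F1 score relating the precision and recall figures above is the harmonic mean F1 = 2PR / (P + R). A minimal sketch of this computation follows (the function name is illustrative, not from the study; note that an aggregate F1 reported for a detector may be computed per class or per confidence threshold, so it need not equal this formula applied to averaged precision and recall):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Using the YOLOv8n figures reported in the study: P = 0.83, R = 0.79
print(round(f1_score(0.83, 0.79), 2))  # 0.81, matching the reported YOLOv8n F1
```

Applied to the YOLOv8n values, the formula reproduces the reported F1 of 0.81.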