Unmanned aerial vehicle (UAV) aerial sensors are an important means of collecting ground image data. Road segmentation of drivable areas and vehicle detection in UAV aerial images can support road monitoring, traffic flow detection, and traffic management, and can be integrated with intelligent transportation systems to assist the work of transportation departments. However, existing algorithms address only a single task, whereas intelligent transportation requires multiple tasks to be processed simultaneously, so they cannot meet complex practical needs. In addition, UAV aerial images are characterized by variable road scenes, large numbers of small targets, and dense vehicles, all of which make these tasks difficult. To address these issues, we propose to perform road segmentation and on-road vehicle detection within a single framework for UAV aerial images, and we conduct experiments on a dataset constructed from the DroneVehicle dataset. For road segmentation, we propose a new algorithm, C-DeepLabV3+, which introduces a coordinate attention (CA) module to obtain more accurate location information for segmentation targets and to produce more continuous target edges; it also introduces a cascade feature fusion module to prevent the loss of detail information and thereby achieve better segmentation performance. For vehicle detection, we propose an improved algorithm, S-YOLOv5, which adds the parameter-free lightweight attention module SimAM. Finally, the proposed road segmentation and vehicle detection framework unites C-DeepLabV3+ and S-YOLOv5 to carry out the two tasks in series. Experimental results on the constructed ViDroneVehicle dataset show that C-DeepLabV3+ achieves an mPA of 98.75% and an mIoU of 97.53%, segmenting the road area more accurately and alleviating the occlusion problem. S-YOLOv5 achieves an mAP of 97.40%, exceeding YOLOv5's 96.95%, and effectively reduces missed and false vehicle detections. Both algorithms outperform multiple state-of-the-art methods. The overall framework proposed in this paper has superior performance and can realize high-quality, high-precision road segmentation and vehicle detection from UAV aerial images.
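As a concrete illustration of the detection-side modification, the following is a minimal PyTorch-style sketch of a parameter-free SimAM attention block as it is commonly implemented (an energy-based per-neuron saliency score followed by a sigmoid gate). The class name, the `e_lambda` regularizer value, and the insertion point in the YOLOv5 backbone are assumptions for illustration and are not taken from this paper.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free attention: weights each activation by an
    energy-based saliency score, then gates it with a sigmoid."""

    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularizer in the energy denominator (assumed default)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a backbone stage
        b, c, h, w = x.shape
        n = h * w - 1
        # squared deviation of each activation from its per-channel spatial mean
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # per-channel variance estimate over the spatial dimensions
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # inverse energy of each neuron (higher = more distinctive)
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        # gate the original features with the attention weights
        return x * torch.sigmoid(e_inv)
```

In a YOLOv5-style detector, such a block would typically be placed after selected backbone or neck stages; because it introduces no learnable parameters, it leaves the model size essentially unchanged.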