Natural disasters cause enormous loss of life and property. Unmanned aerial vehicles (UAVs) offer high mobility, high flexibility, and rapid deployment, making them important equipment for post-disaster rescue. However, UAVs typically have limited battery capacity and computing power, so they are ill-suited to executing compute-intensive tasks during rescue operations. Since parking resources are widespread in cities, this work investigates having multiple parked vehicles cooperate to compute the applications offloaded from UAVs during post-disaster rescue, so as to guarantee the quality of experience (QoE) of the UAVs. To execute uploaded tasks effectively, surviving parked vehicles within the monitoring range of a UAV are organized into a cluster wherever possible. The task execution cost is then analyzed. Furthermore, a deep reinforcement learning (DRL)-based offloading policy is constructed, which interacts with the environment intelligently to achieve the optimization goals. Simulation experiments show that the proposed offloading scheme achieves a higher task completion rate and a lower task execution cost than baseline schemes.