Abstract

This paper describes an optimal guidance policy for a vehicle to reach a relative position to a target using information from a single fixed camera. Applying an extended Kalman filtering method, both the velocity and the position of the vehicle relative to the target can be estimated. However, estimates of the distance between the vehicle and the target are much worse than those of the other states. Therefore, in this paper, an optimal guidance policy is introduced that reaches the destination while maximizing the predicted accuracy of the range estimate. By limiting vehicle motion to two dimensions, the exact solution for the control inputs of this optimization problem can be obtained. Simulation results show that the resulting optimal guidance policy gives far more accurate range estimation than a simple linear guidance policy.

Introduction

The automation of unmanned aerial vehicles (UAVs) has progressed considerably in recent years, with the development of sensors such as GPS making a large contribution. In most cases, automated UAV flight control has been achieved by using multi-sensor fusion to estimate the vehicle states accurately. However, because this results in a complex and expensive system that is not suitable for small, expendable UAVs, there is a need for an autonomous flight system that is simpler and less expensive. As birds and insects demonstrate, vision information can be used almost exclusively in an autonomous flight system, with the potential to improve system reliability, performance, and cost. Inspired by this idea, this paper introduces a method to navigate and guide a vehicle to a target using vision information from a single camera fixed to the vehicle.

In this approach, an extended Kalman filter (EKF) is used to estimate the relative velocity and position. The single camera provides noisy measurements of the target's horizontal and vertical position in its image. By applying an EKF to these two measurements, estimates of the relative position and velocity are obtained. The EKF estimation performance depends on the camera motion, and the estimation can be improved by an appropriate trajectory generation [1].
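The filter equations are not given in this section; the sketch below is only a minimal illustration of the setup just described, assuming a constant-velocity process model, a normalized pinhole projection as the measurement, and noise levels chosen arbitrarily (none of these values come from the paper).

```python
import numpy as np

# Minimal EKF sketch: estimate relative position/velocity of a target from
# noisy image-plane measurements of a single fixed camera.
# Assumed (not from the paper): state x = [X, Y, Z, Vx, Vy, Vz] in the camera
# frame with Z the range along the optical axis, constant-velocity dynamics,
# and a pinhole projection with normalized focal length f.

f = 1.0        # assumed focal length (normalized image coordinates)
dt = 0.1       # assumed sample time [s]

F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt           # constant-velocity process model
Q = 1e-4 * np.eye(6)                       # assumed process noise covariance
R = (1e-3) ** 2 * np.eye(2)                # assumed measurement noise covariance

def h(x):
    """Pinhole projection: horizontal and vertical target position in the image."""
    X, Y, Z = x[0], x[1], x[2]
    return np.array([f * X / Z, f * Y / Z])

def H_jac(x):
    """Jacobian of the projection with respect to the state."""
    X, Y, Z = x[0], x[1], x[2]
    H = np.zeros((2, 6))
    H[0, 0], H[0, 2] = f / Z, -f * X / Z**2
    H[1, 1], H[1, 2] = f / Z, -f * Y / Z**2
    return H

def ekf_step(x, P, z):
    """One predict/update cycle given an image measurement z = [u, v]."""
    x = F @ x                               # predict state
    P = F @ P @ F.T + Q                     # predict covariance
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - h(x))                  # update with the innovation
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

The diagonal of P gives the error variance of each state estimate; the entry corresponding to Z is the range-error variance discussed next.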
When the vehicle approaches straight toward the target, the velocity and position in the camera image plane can be estimated with relatively good accuracy. However, the estimate of the distance between the vehicle (or camera) and the target contains large errors. If these poor range estimates are used to control a vehicle performing station-keeping with the target, a dangerous overshoot may occur. The reason for this large estimation error is that the vehicle does not necessarily have enough lateral motion to resolve depth. It is well known that the accuracy of range estimation depends on the camera's translational motion, and that the best translation for range estimation is a motion parallel to its image plane [2]. Therefore, if the vehicle flies to the target along a meandering path, more accurate range estimates can be obtained and the overshoot can be avoided.

Through the EKF, the variance of the estimation error of each state is available. To maximize the estimation accuracy, this variance should be minimized. In other words, to obtain the optimal flight path to the desired relative position to the target, the performance index is set so as to minimize the variance of the range estimation error, and the resulting optimization problem is solved subject to a known camera motion. In this paper, we show that this problem can be solved analytically. The simulation results of the optimal guidance policy are then compared with those of a simple linear guidance policy.
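Why lateral translation helps can be seen directly from the pinhole geometry: the image shift produced by a sideways step is roughly the step length divided by the range, whereas a step along the optical axis toward a near-centered target barely moves its image regardless of the range. The snippet below (camera positions, candidate ranges, and step size are all illustrative assumptions, not values from the paper) compares the image displacement for two candidate ranges under the two kinds of motion.

```python
import numpy as np

# Illustration with assumed numbers: how much the image measurement changes for
# two hypothetical ranges, under forward motion vs. lateral motion. Parallax
# from motion parallel to the image plane discriminates range; motion along the
# optical axis toward a near-centered target barely does.

f = 1.0

def project(p):
    X, Y, Z = p
    return np.array([f * X / Z, f * Y / Z])

target = np.array([0.0, 0.0, 0.0])           # target at the origin (assumed)
ranges = [40.0, 50.0]                        # two candidate ranges [m]
step = 1.0                                   # 1 m of camera translation

for label, d in [("forward (along optical axis)", np.array([0.0, 0.0, 1.0])),
                 ("lateral (parallel to image plane)", np.array([1.0, 0.0, 0.0]))]:
    shifts = []
    for Z0 in ranges:
        cam0 = np.array([0.1, 0.0, -Z0])     # slight offset so the target is near center
        before = project(target - cam0)
        after = project(target - (cam0 + step * d))
        shifts.append(np.linalg.norm(after - before))
    print(f"{label}: image shift at {ranges[0]} m = {shifts[0]:.5f}, "
          f"at {ranges[1]} m = {shifts[1]:.5f}, difference = {abs(shifts[0] - shifts[1]):.5f}")
```

Only the lateral step produces an image shift that differs noticeably between the two ranges, which is why the range becomes observable once the path includes sideways motion.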

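The paper obtains the optimal planar trajectory in closed form; that derivation is not reproduced in this section. As a rough numerical counterpart of the same idea, the predicted range-error variance of any candidate trajectory can be evaluated before flight, because the EKF covariance recursion needs only the Jacobians along the nominal path, not the actual measurement values. The sketch below uses the same illustrative models as above, with hypothetical straight and weaving paths; it is not the paper's analytical solution.

```python
import numpy as np

# Predicted range-error variance for a candidate camera trajectory (a sketch
# with assumed models and numbers). The EKF covariance is propagated along the
# nominal path, so the expected accuracy can be compared across trajectories.

f, dt = 1.0, 0.1
Q = 1e-4 * np.eye(6)
R = (1e-3) ** 2 * np.eye(2)

def H_jac(rel):
    X, Y, Z = rel
    H = np.zeros((2, 6))
    H[0, 0], H[0, 2] = f / Z, -f * X / Z**2
    H[1, 1], H[1, 2] = f / Z, -f * Y / Z**2
    return H

def predicted_range_variance(camera_positions):
    """Propagate the EKF covariance along a nominal path; return final var(Z)."""
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt
    P = np.diag([1.0, 1.0, 100.0, 0.1, 0.1, 0.1])   # large initial range uncertainty
    target = np.zeros(3)
    for cam in camera_positions:
        P = F @ P @ F.T + Q                          # predict
        H = H_jac(target - cam)                      # linearize about the nominal path
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(6) - K @ H) @ P                  # update (measurement values not needed)
    return P[2, 2]

# Hypothetical paths: straight approach vs. a weaving (meandering) approach.
t = np.arange(0, 20, dt)
straight = np.stack([0.1 * np.ones_like(t), np.zeros_like(t), -100.0 + 4.0 * t], axis=1)
weaving = straight.copy()
weaving[:, 0] += 5.0 * np.sin(0.5 * t)               # lateral oscillation

print("straight path:", predicted_range_variance(straight))
print("weaving path: ", predicted_range_variance(weaving))
```

In this kind of comparison the weaving path yields a much smaller predicted range variance than the straight approach, which is the effect the optimal guidance policy exploits; the paper's contribution is to choose that lateral motion optimally rather than by trial evaluation.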