In this paper, we propose a vision-based estimation method for the autonomous landing of fixed-wing aircraft in Advanced Air Mobility. Autonomous flight with minimal human intervention has attracted significant interest, with a particular focus on autonomous landing; the goal is to overcome the limitations of existing instrument landing systems and to reduce reliance on GNSS during landing. The proposed system leverages the following techniques to achieve high performance and real-time operation in diverse real-world environments and during all stages of landing. A deep learning segmentation model was trained and used for runway recognition that is robust to environmental changes around the runway. The algorithms were designed to adapt to each flight stage, accounting for the differing information that image data provide while the aircraft is airborne and while it is on the ground. The lateral position of the aircraft relative to the runway was estimated using bird’s-eye-view conversion and inverse projection rather than conventional perspective-n-point methods, reducing the error introduced by the conversion between the 2D image and the 3D world. Mixed-precision inference was applied to the segmentation model to improve inference speed and ensure real-time operation in deployment. A Kalman filter was used to cope with sensor uncertainties arising in the real flight environment, improving the stability and reliability of the estimates. Consequently, we show that the average lateral position estimation error of the proposed method is comparable to the accuracy of a single GNSS receiver, and we demonstrate successful autonomous landing of our test aircraft through a series of flight tests.
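
The paper's implementation details are not reproduced here, but the following minimal sketch illustrates the kind of inverse projection step described above: a runway-centerline pixel found by the segmentation model is back-projected onto a flat ground plane using camera intrinsics, aircraft attitude, and altitude, yielding a lateral offset in metres. The camera matrix, mounting convention, attitude angles, and altitude in this example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rotation_body_to_cam():
    # Assumed mounting: camera looks forward along the aircraft x-axis,
    # image x to the right, image y downwards (a common convention, not from the paper).
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [1.0, 0.0, 0.0]])

def rotation_ned_to_body(roll, pitch, yaw):
    """Standard aerospace Z-Y-X Euler rotation from the NED frame to the body frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    r_x = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    r_y = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    r_z = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    return r_x @ r_y @ r_z

def pixel_to_ground(u, v, K, roll, pitch, yaw, altitude):
    """Back-project pixel (u, v) onto the flat ground plane below the camera."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])            # viewing ray in the camera frame
    ray_ned = rotation_ned_to_body(roll, pitch, yaw).T @ rotation_body_to_cam().T @ ray_cam
    scale = altitude / ray_ned[2]                                 # intersect the ray with the ground plane
    point = scale * ray_ned                                       # [north, east, down] offsets in metres
    return point[0], point[1]

# Example: lateral (east) offset of a detected runway-centerline pixel.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                   # assumed intrinsics
north, east = pixel_to_ground(300.0, 400.0, K,
                              roll=0.02, pitch=-0.05, yaw=0.0,    # radians, illustrative
                              altitude=30.0)                      # metres above the runway
print(f"lateral offset from centerline: {east:.2f} m")
```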
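
Likewise, the smoothing role of the Kalman filter can be sketched with a simple constant-velocity model over the lateral offset. The process and measurement noise values below are placeholders, not the tuning used in the paper.

```python
import numpy as np

class LateralKalmanFilter:
    """Constant-velocity Kalman filter over the lateral offset (state: [offset, offset_rate])."""

    def __init__(self, q=0.5, r=2.0):
        self.x = np.zeros(2)                 # state estimate
        self.P = np.eye(2) * 10.0            # initial state covariance (illustrative)
        self.q = q                           # process noise intensity (assumed)
        self.R = np.array([[r]])             # measurement noise variance (assumed)
        self.H = np.array([[1.0, 0.0]])      # only the offset itself is measured

    def step(self, z, dt):
        # Predict with a constant-velocity model.
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = self.q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        # Update with the vision-based lateral offset measurement z (metres).
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                     # filtered lateral offset

# Example: smooth noisy per-frame offsets arriving at 20 Hz.
kf = LateralKalmanFilter()
for z in [1.8, 2.3, 1.6, 2.1, 1.9]:
    print(round(kf.step(z, dt=0.05), 2))
```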