To address the challenges of visual navigation for asteroid landing with traditional point features, namely large recognition and extraction errors, low computational efficiency, and limited navigation accuracy, a multi-type fusion visual navigation method is proposed. The method overcomes the limitations of single-type features and enhances navigation accuracy. Analytical criteria for selecting multi-type features are introduced, improving both computational efficiency and navigation accuracy. For pose estimation, absolute and relative pose estimation methods based on multi-type feature fusion are developed, and a multi-type feature normalization scheme is established, which significantly improves navigation accuracy and lays the groundwork for the flexible application of joint absolute-relative estimation. The feasibility and effectiveness of the proposed method are validated through simulation experiments on asteroid 4769 Castalia.