Digital farming uses technology to increase agricultural production through automated tasks, and it has recently incorporated AgBots for more reliable data collection via autonomous navigation. These AgBots are equipped with sensors such as GNSS, cameras, and LiDAR, but each sensor has limitations: GNSS accuracy degrades under the canopy, cameras are sensitive to outdoor lighting and platform vibration, and LiDAR suffers from occlusion. To address these limitations and ensure robust autonomous navigation, this paper presents a sensor selection methodology based on identifying environmental conditions from sensor data. By extracting features from GNSS measurements, images, and point clouds, we determine the feasibility of using each sensor and build a selection vector indicating its viability. Our results show that the proposed methodology correctly selects cameras or LiDAR inside the crop and GNSS outside the crop at least 87% of the time. The main limitation found is that GNSS features take 20 s to adapt during transitions into and out of the crop. We compare a variety of classification algorithms in terms of performance and computational cost, and the results show that our method achieves higher performance at lower computational cost. Overall, this methodology enables low-cost selection of the most suitable sensor for a given agricultural environment.
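As a rough illustration of the selection-vector idea described above, the sketch below maps hand-crafted per-sensor features to a binary viability vector. The feature names, thresholds, and rule-based logic are assumptions chosen for illustration; they are not the feature set or the classifier actually used in the paper.

```python
import numpy as np

def select_sensors(gnss_num_satellites: int, gnss_hdop: float,
                   image_brightness_std: float,
                   lidar_return_density: float) -> np.ndarray:
    """Return a binary selection vector [GNSS, camera, LiDAR].

    All feature names and thresholds below are hypothetical
    placeholders, not values taken from the paper.
    """
    # GNSS is viable with enough satellites and low dilution of
    # precision (i.e., outside the crop, under open sky).
    gnss_ok = gnss_num_satellites >= 6 and gnss_hdop < 2.0
    # The camera is viable when image contrast is neither washed
    # out by glare nor lost in darkness.
    camera_ok = 20.0 < image_brightness_std < 80.0
    # LiDAR is viable when the point cloud has enough returns to
    # reveal row structure (i.e., it is not heavily occluded).
    lidar_ok = lidar_return_density > 0.5
    return np.array([gnss_ok, camera_ok, lidar_ok], dtype=int)

# Example: under-canopy conditions reject GNSS but keep camera/LiDAR.
print(select_sensors(gnss_num_satellites=4, gnss_hdop=3.5,
                     image_brightness_std=45.0,
                     lidar_return_density=0.9))
# -> [0 1 1]
```

In the paper's actual pipeline, this hand-written rule set would be replaced by a trained classifier operating on the extracted GNSS, image, and point-cloud features; the sketch only shows the shape of the input features and the output selection vector.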