Leveraging diverse optomechanical and imaging technologies for precision agriculture applications is gaining attention in emerging economies. Precise spatial detection of plant objects in farms is crucial for optimizing plant-level nutrition and managing pests and diseases. High-resolution remote sensors mounted on drones are increasingly deployed for large-scale crop mapping and field variability characterization. While field-level crop identification and crop-soil discrimination have been studied extensively, within-plant canopy discrimination of crop and soil has not been explored in real agricultural farms. The objectives of this study are: (i) adoption and assessment of spectral unmixing for discriminating crop and soil at the within-plant canopy level, and (ii) generation of benchmark terrestrial and drone-based hyperspectral datasets for plant- or sub-plant-level discrimination using various spectral mixture modelling approaches and sources of endmembers. We acquired hyperspectral imagery of vegetable crops using a frame-based sensor mounted on a drone flying at different heights. Several linear, non-linear, and sparsity-based spectral unmixing methods were then used to discriminate plant and soil based on spectral signatures (endmembers) extracted from spectral libraries prepared from in situ (field) measurements, ground-based imagery, and drone-based hyperspectral imagery. The results, validated against pixel-to-pixel ground truth data, indicate an overall crop-soil discrimination accuracy of 99–100%, depending on the combination of endmember source and flying height. The influences of endmember source, spatial resolution (as governed by flying height), and inversion algorithm on the quality of the estimated abundances are assessed from a verifiable and functionally relevant perspective.
The generated hyperspectral datasets and ground truth data can be used to develop and test new methods for sub-canopy-level crop-soil discrimination in a range of agricultural remote sensing applications.
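To make the unmixing idea concrete, the linear mixing model referenced above treats each pixel spectrum as a convex combination of endmember spectra (here, crop and soil), and inverts for the per-pixel abundances. The sketch below is a minimal, illustrative implementation, not the study's actual method: the endmember spectra are synthetic, the sum-to-one constraint is imposed via the common augmented least-squares ("delta") trick, and nonnegativity is approximated by clipping rather than a full FCLS solver.

```python
import numpy as np

def unmix_pixel(pixel, endmembers, delta=1e3):
    """Estimate endmember abundances for one pixel under the linear
    mixing model y = E a. The sum-to-one constraint is enforced by
    augmenting the system with a weighted row of ones; nonnegativity
    is approximated by clipping (a simplification of full FCLS)."""
    n_bands, n_em = endmembers.shape
    # Augmented system: append the constraint delta * sum(a) = delta
    E_aug = np.vstack([endmembers, delta * np.ones((1, n_em))])
    y_aug = np.append(pixel, delta)
    a, *_ = np.linalg.lstsq(E_aug, y_aug, rcond=None)
    a = np.clip(a, 0.0, None)       # approximate nonnegativity
    return a / a.sum()              # renormalize to sum to one

# Synthetic 5-band endmembers (illustrative values, not measured spectra)
crop = np.array([0.10, 0.20, 0.60, 0.70, 0.65])
soil = np.array([0.30, 0.35, 0.40, 0.45, 0.50])
E = np.column_stack([crop, soil])

# A pixel mixed 70% crop / 30% soil; unmixing recovers the fractions
pixel = 0.7 * crop + 0.3 * soil
abundances = unmix_pixel(pixel, E)
```

In practice, abundances estimated this way can be thresholded (e.g. crop fraction above 0.5) to produce the per-pixel crop-soil labels that are compared against ground truth.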