Abstract

Deep learning models for plant species identification rely on large annotated datasets. The Pl@ntNet system enables global data collection by allowing users to upload and annotate plant observations, which leads to noisy labels due to diverse user skills. Achieving consensus is crucial for training, but the vast scale of the collected data (in numbers of observations, users, and species) makes traditional label aggregation strategies challenging. Existing methods either retain all observations, resulting in noisy training data, or selectively keep those with sufficient votes, discarding valuable information. Additionally, since many species are rarely observed, user expertise cannot be evaluated through inter-user agreement: otherwise, botanical experts would carry less weight in the AI training step than the average user. Our proposed label aggregation strategy aims to cooperatively train plant identification AI models. It estimates user expertise as a per-user trust score based on each user's ability to identify plant species from crowdsourced data. The trust score is estimated recursively from the species a user identified correctly given the current label estimates. This interpretable score exploits botanical experts' knowledge and the heterogeneity of users. Unlike other approaches, our strategy then removes unreliable observations while retaining those with only a few trusted annotations. We evaluate Pl@ntNet's strategy on a newly released large subset of the Pl@ntNet database focused on European flora, comprising over 6 M observations and 800 K users. This anonymized dataset of votes and observations is released openly via Lefort, Affouard, et al. (2024). We demonstrate that estimating users' skills based on the diversity of their expertise enhances labelling performance. Our findings emphasize the synergy between human annotation and data filtering in improving AI performance on a refined training dataset. Finally, we explore incorporating AI-based votes alongside human input into the label aggregation, which can further enhance human-AI interaction for detecting unreliable observations, even with few votes.
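To make the recursive trust-score idea concrete, the sketch below shows a minimal, generic version of iterative trust-weighted label aggregation with a final filtering step. It is not the authors' actual Pl@ntNet algorithm: the function name, vote format, thresholds, and the simple "agreement with current labels" trust update are all illustrative assumptions standing in for the paper's expertise estimation and observation-filtering rules.

```python
# Minimal illustrative sketch (assumed, not the paper's implementation) of
# iterative trust-weighted label aggregation over crowdsourced votes.
from collections import defaultdict

def aggregate_labels(votes, n_iters=5, trust_floor=0.1, keep_threshold=0.5):
    """votes: list of (user_id, observation_id, species_id) tuples.

    Returns (labels, trust): labels maps observation -> species (or None if
    deemed unreliable); trust maps user -> score in [0, 1].
    """
    users = {u for u, _, _ in votes}
    trust = {u: 1.0 for u in users}  # start by trusting every user equally

    for _ in range(n_iters):
        # Step 1: trust-weighted vote per observation, current label = argmax
        weights = defaultdict(lambda: defaultdict(float))
        for u, obs, sp in votes:
            weights[obs][sp] += trust[u]
        labels = {obs: max(w, key=w.get) for obs, w in weights.items()}

        # Step 2: a user's trust is the fraction of their votes that match the
        # current label estimates (a stand-in for "correctly identified species")
        correct, total = defaultdict(float), defaultdict(float)
        for u, obs, sp in votes:
            total[u] += 1.0
            correct[u] += float(labels[obs] == sp)
        trust = {u: max(trust_floor, correct[u] / total[u]) for u in users}

    # Filtering: keep an observation only if its winning label carries enough
    # trusted weight; otherwise mark it unreliable and exclude it from training
    final = {}
    for obs, w in weights.items():
        support = w[labels[obs]] / sum(w.values())
        final[obs] = labels[obs] if support >= keep_threshold else None
    return final, trust
```

In this simplified form, an observation with only one or two votes can still be kept if those votes come from highly trusted users, which mirrors the abstract's point about retaining observations with limited but trusted annotations rather than discarding them.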