Abstract

There is an urgent need to develop new methods for monitoring the state of the environment. One potential approach is to use new data sources, such as User-Generated Content, to augment existing approaches. To date, however, studies have typically focused on a single data source and modality. We take a new approach, using citizen science records of red kite (Milvus milvus) sightings to train and validate a Convolutional Neural Network (CNN) capable of identifying images containing red kites. This CNN is integrated into a sequential workflow that also uses an off-the-shelf bird classifier and text metadata to retrieve observations of red kites in the Chilterns, England. Our workflow reduces an initial set of more than 600,000 images to just 3065 candidate images. Manual inspection of these images shows that our approach has a precision of 0.658. A workflow using text alone identifies 14% fewer images than one that also includes image content analysis, and by combining the image and text classifiers we achieve a near-perfect precision of 0.992. Images retrieved from social media records complement those recorded by citizen scientists both spatially and temporally, and our workflow is sufficiently generic that it can easily be transferred to other species.
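As a rough illustration of the sequential workflow described above, the Python/PyTorch sketch below shows one way a text-then-image filtering pipeline could be wired together. This is a minimal sketch under stated assumptions, not the paper's actual implementation: the checkpoint file, keyword list, threshold, post structure, and all function names are hypothetical.

```python
# Minimal sketch of a two-stage text-then-image filtering pipeline.
# All names, files, and parameters here are illustrative assumptions;
# the abstract does not specify the paper's actual code.
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical binary CNN fine-tuned on citizen-science red kite images.
kite_cnn = models.resnet50()
kite_cnn.fc = torch.nn.Linear(kite_cnn.fc.in_features, 2)  # [not_kite, kite]
kite_cnn.load_state_dict(torch.load("red_kite_cnn.pt"))    # assumed checkpoint
kite_cnn.eval()

# Standard ImageNet-style preprocessing for the CNN input.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

TEXT_KEYWORDS = {"red kite", "milvus milvus"}  # illustrative keyword list

def text_match(metadata: str) -> bool:
    """Stage 1: cheap text filter over post titles, tags, and descriptions."""
    lower = metadata.lower()
    return any(k in lower for k in TEXT_KEYWORDS)

@torch.no_grad()
def image_match(path: str, threshold: float = 0.5) -> bool:
    """Stage 2: the CNN scores the image; keep it if P(kite) >= threshold."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    prob_kite = torch.softmax(kite_cnn(x), dim=1)[0, 1].item()
    return prob_kite >= threshold

def is_candidate(post: dict) -> bool:
    """Require agreement of both classifiers, mirroring the combination
    that the abstract reports yields near-perfect precision."""
    return text_match(post["text"]) and image_match(post["image_path"])
```

Running the cheap text filter first means the expensive CNN only sees posts that already mention the species, which is one plausible reason for structuring the workflow sequentially.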
