Abstract

We investigate the design and implementation of Where's The Bear (WTB), an end-to-end, distributed, IoT system for wildlife monitoring. WTB implements a multi-tier (cloud, edge, sensing) system that integrates recent advances in machine-learning-based image processing to automatically classify animals in images from remote, motion-triggered camera traps. We use non-local, resource-rich, public/private cloud systems to train the machine learning models, and "in-the-field," resource-constrained edge systems to perform classification near the IoT sensing devices (cameras). We deploy WTB at the UCSB Sedgwick Reserve, a 6,000-acre site for environmental research, and use it to aggregate, manage, and analyze over 1.12M images. WTB integrates Google TensorFlow and OpenCV applications to perform automatic classification and tagging for a subset of these images. To avoid transferring large numbers of training images for TensorFlow over a low-bandwidth network linking Sedgwick to the public/private clouds, we devise a technique that uses stock Google Images to construct a synthetic training set using only a small number of empty, background images from Sedgwick. Our system is able to accurately identify bears, deer, coyotes, and empty images, and it significantly reduces the time and bandwidth requirements for image transfer, as well as end-user analysis time, since WTB automatically filters the images on-site.
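The core idea behind the synthetic training set is to composite stock animal cutouts (e.g., from Google Images) onto a handful of empty background frames captured at the site, so that no large corpus of real camera-trap images needs to cross the low-bandwidth link. The abstract does not give the exact procedure, so the following is only a minimal sketch of masked compositing using NumPy arrays in place of real images; the function name `composite` and the toy data are illustrative assumptions, and the actual WTB pipeline uses OpenCV and TensorFlow.

```python
import numpy as np


def composite(background: np.ndarray, animal: np.ndarray,
              mask: np.ndarray, x: int, y: int) -> np.ndarray:
    """Paste an animal cutout onto an empty camera-trap background.

    background: H x W x 3 image; animal: h x w x 3 cutout;
    mask: h x w binary mask (nonzero marks animal pixels);
    (x, y): top-left corner where the cutout is placed.
    """
    out = background.copy()
    h, w = animal.shape[:2]
    region = out[y:y + h, x:x + w]       # view into the output image
    region[mask > 0] = animal[mask > 0]  # copy only the masked pixels
    return out


# Tiny demonstration with synthetic arrays standing in for real images.
bg = np.zeros((4, 4, 3), dtype=np.uint8)        # "empty" background frame
fg = np.full((2, 2, 3), 255, dtype=np.uint8)    # stock "animal" cutout
m = np.array([[1, 0], [1, 1]], dtype=np.uint8)  # animal silhouette mask
img = composite(bg, fg, m, 1, 1)
```

Repeating this paste at varied positions and scales yields many labeled training images per animal class while transferring only the small set of empty backgrounds.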

