In this paper, we investigate the design and implementation of Where's The Bear (WTB), an end-to-end, distributed IoT system for wildlife monitoring. WTB implements a multi-tier (public/private cloud, edge, sensing) system that integrates recent advances in machine learning-based image processing to automatically classify animals in images from remote, motion-detection camera traps. We use non-local, resource-rich, public/private cloud systems to train the machine learning models, and ``in-the-field,'' resource-constrained edge systems to perform classification near the IoT sensing devices (cameras). WTB relieves scientists and citizen scientists of the burden of manual image classification, and saves time and bandwidth by automatically filtering images on-site based on characteristics of interest before any off-site transfer.
We deploy this system at the UCSB Sedgwick Reserve, a 6,000-acre site for environmental research, and use it to aggregate, manage, and analyze over 1.12M images. WTB integrates Google TensorFlow and OpenCV applications to perform automatic classification and tagging for a subset of these images. To avoid transferring large numbers of TensorFlow training images over the low-bandwidth network linking Sedgwick to the public/private clouds, we devise a technique that uses stock Google Images to construct a training set, requiring only a small number of empty, background images from Sedgwick. Our system accurately identifies bears, deer, and coyotes, and significantly reduces the time and bandwidth required for image transfer.
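The core of the training-set construction technique is image compositing: animal cutouts obtained from stock Google Images are overlaid onto the small set of empty background frames captured at Sedgwick, yielding labeled training examples without shipping camera-trap images off-site. The paper does not specify the exact compositing code, so the following is a minimal sketch of the idea using NumPy arrays; the function name, mask-based pasting, and placement parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def composite(background, foreground, mask, top_left):
    """Paste a masked foreground cutout (e.g., a stock animal image)
    onto an empty background frame to synthesize a training image.

    background : HxWx3 uint8 array (empty camera-trap frame)
    foreground : hxwx3 uint8 array (animal cutout from a stock image)
    mask       : hxw boolean array, True where the animal's pixels are
    top_left   : (row, col) placement of the cutout in the background
    """
    out = background.copy()          # leave the original frame untouched
    y, x = top_left
    h, w = mask.shape
    region = out[y:y + h, x:x + w]   # view into the output array
    region[mask] = foreground[mask]  # copy only the animal's pixels
    return out

# Example: place a 4x4 white "animal" onto a 10x10 black background.
background = np.zeros((10, 10, 3), dtype=np.uint8)
foreground = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.ones((4, 4), dtype=bool)
synthetic = composite(background, foreground, mask, (2, 3))
```

In practice the cutouts would also be scaled, rotated, and brightness-adjusted to match camera-trap conditions before pasting; the sketch above shows only the basic overlay step.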