Abstract
Over the next 5 years major advances in the development and application of numerous technologies related to computing, mobile phones, artificial intelligence (AI), and augmented reality (AR) will have a dramatic impact in biodiversity monitoring and conservation. Over a 2-week period several of us had the opportunity to meet with multiple technology experts in the Silicon Valley, California, USA to discuss trends in technology innovation, and how they could be applied to conservation science and ecology research. Here we briefly highlight some of the key points of these meetings with respect to AI and Deep Learning. Computing: Investment and rapid growth in AI and Deep Learning technologies are transforming how machines can perceive the environment. Much of this change is due to increased processing speeds of Graphics Processing Units (GPUs), which is now a billion-dollar industry. Machine learning applications, such as convolutional neural networks (CNNs) run more efficiently on GPUs and are being applied to analyze visual imagery and sounds in real time. Rapid advances in CNNs that use both supervised and unsupervised learning to train the models is improving accuracy. By taking a Deep Learning approach where the base layers of the model are built upon datasets of known images and sounds (supervised learning) and later layers relying on unclassified images or sounds (unsupervised learning), dramatically improve the flexibility of CNNs in perceiving novel stimuli. The potential to have autonomous sensors gathering biodiversity data in the same way personal weather stations gather atmospheric information is close at hand. Mobile Phones: The phone is the most widely used information appliance in the world. No device is on the near horizon to challenge this platform, for several key reasons. First, network access is ubiquitous in many parts of the world. Second, batteries are improving by about 20% annually, allowing for more functionality. Third, app development is a growing industry with significant investment in specializing apps for machine-learning. While GPUs are already running on phones for video streaming, there is much optimism that reduced or approximate Deep Learning models will operate on phones. These models are already working in the lab, with the biggest hurdle being power consumption and developing energy efficient applications and algorithms to run complicated AI processes will be important. It is just a matter of time before industry will have AI functionality on phones. These rapid improvements in computing and mobile phone technologies have huge implications for biodiversity monitoring, conservation science, and understanding ecological systems. Computing: AI processing of video imagery or acoustic streams create the potential to deploy autonomous sensors in the environment that will be able to detect and classify organisms to species. Further, AI processing of Earth spectral imagery has the potential to provide finer grade classification of habitats, which is essential in developing fine scale models of species distributions over broad spatial and temporal extents. Mobile Phones: increased computing functionality and more efficient batteries will allow applications to be developed that will improve an individual’s perception of the world. Already AI functionality of Merlin improves a birder’s ability to accurately identify a bird. 
Linking this functionality to sensor devices such as specialized glasses, binoculars, or listening devices will help an individual detect and classify objects in the environment. In conclusion, computing technology is advancing at a rapid rate, and soon autonomous sensors placed strategically in the environment will augment the species occurrence data gathered by humans. The mobile phone in everyone's pocket should be thought of strategically as a way to connect people to the environment and improve their ability to gather meaningful biodiversity information.
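To make the Computing discussion above concrete, the following sketch shows a small convolutional neural network for classifying wildlife images, written in PyTorch. It is an illustration only, not a model from the meetings summarized here: the architecture, the ten-species output, and the random input batch are all assumptions, and a deployed autonomous sensor would use a larger model trained on labeled imagery or audio spectrograms.

```python
# Minimal sketch (hypothetical) of a CNN species classifier, run on a GPU when available.
import torch
import torch.nn as nn

class SpeciesCNN(nn.Module):
    def __init__(self, num_species: int):
        super().__init__()
        # Convolutional layers learn visual features (edges, textures, patterns).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        # The final layer maps the learned features to species classes.
        self.classifier = nn.Linear(64, num_species)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# CNN inference runs far faster on a GPU, which is why GPU growth matters for real-time sensing.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SpeciesCNN(num_species=10).to(device)

# A batch of 224x224 RGB frames; random data stands in for camera-trap images.
images = torch.randn(8, 3, 224, 224, device=device)
predictions = model(images).argmax(dim=1)  # predicted species index for each frame
```

The same kind of network can be applied to sound by converting audio into spectrograms and treating them as images, a common approach in acoustic species classification.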
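The optimism that reduced or approximate Deep Learning models will run on phones generally rests on techniques such as post-training quantization, which stores model weights in 8-bit form to cut size and energy use at a small cost in accuracy. The sketch below is a hypothetical illustration using TensorFlow Lite; the placeholder Keras model and the output file name are assumptions standing in for a trained species classifier.

```python
# Minimal sketch (hypothetical) of shrinking a trained model for on-phone use.
import tensorflow as tf

# Placeholder model standing in for a trained species classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite with default post-training quantization,
# which reduces weight precision to shrink the model and save power.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The compact file can be bundled with a mobile app and run with the TFLite runtime.
with open("species_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```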