Abstract

The invention of the camera trap by George Shiras in the late 1890s, and its widespread adoption by hunters roughly 100 years later, has armed scientists and managers with a powerful noninvasive tool for collecting data on wildlife (Sanderson and Trolle 2005). Imagery from camera traps supports ecological investigations, inventory and monitoring networks, and the cataloging of biodiversity (e.g., Karanth and Nichols 1998, MacKenzie et al. 2005, Trolle et al. 2007, Stein et al. 2008). Such use of camera traps continues to expand, both in the number of cameras deployed and in the number of images taken (Kays and Slauson 2008, Kays et al. 2009).

Yet this increase creates a paradox: while practitioners seek more data to improve their analyses, they buckle under the mounds of imagery piling up before them. This situation engenders four problems. First, because cataloging imagery is slow, image identification lags behind acquisition and many images remain unidentified. Second, hand data entry is tedious and error prone (Maydanchik 2007). Third, inconsistent filing and naming conventions complicate data retrieval and sharing (Chaudhary et al. 2010). Fourth, the struggle to keep pace with acquiring and managing data from existing camera traps slows the deployment of additional cameras (and hence further data acquisition). These four problems stem from two general issues: the inability to handle large volumes of imagery, and the lack of systematic organization. With few tools presently available, users have responded either by simply storing raw images or by ad hoc labeling and cataloging. The former means much data sit unanalyzed; the latter complicates data retrieval, analysis, and collaboration. As Chaudhary et al. (2010) found, across-site comparisons and meta-analyses are nearly nonexistent. The few software tools now available offer limited data analysis capability (Camera Base 2007). Even established global monitoring networks such as the TEAM Network (2008) advocate recording data gleaned from both digital and film camera traps by hand in a spreadsheet (Kays et al. 2009). As a result, camera trapping remains an underutilized tool.

To address these issues we offer a three-step, standardized procedure to retrieve, label, store, analyze, and disseminate camera trap data. The methodology relies solely on open-source software and two computer programs we created. Our procedure is fast and simple and does not require hand data entry, thus greatly reducing data entry errors (Maydanchik 2007). Output from our analysis software can be imported directly into other analysis programs (e.g., PRESENCE; MacKenzie et al. 2005) and into the standard spreadsheets used elsewhere (TEAM Network 2008). The analysis program also calculates 18 parameters commonly examined by ecologists and wildlife managers (Table 1). These 18 parameters serve as examples; the potential for expansion is self-evident. We illustrate them by summarizing analyses of data obtained from camera-trapping projects in south-central Arizona (~30,000 images) and Suriname, South America (~75,000 images).
We are presently organizing camera programs throughout the southwestern USA following this methodology (currently ~300,000 images from Arizona and New Mexico, and growing).

Networks of camera traps can produce many thousands of images over modest time periods. For example, two dozen camera traps at Sevilleta National Wildlife Refuge in New Mexico, USA, produced more than 110,000 useful images in just six months. Although retrieving data from the camera traps and storing the image files is trivial, data entry is time consuming and error prone because it is often done by hand (Maydanchik 2007). Our objectives were to increase the speed of data entry and analysis while minimizing data entry errors. We achieved this by restricting user interaction with image file data through the use of software tools. Data retrieval, image file labeling, storage, and data analysis form a three-step process enabled by open-source software and software that we produced. Because each step is standardized, meta-analyses follow seamlessly (Chaudhary et al. 2010). Although digital images are more easily handled, we recognize that much camera trap data remains on film; our procedure therefore also accommodates film data entry.

Motivated by our own needs to collect, store, analyze, and share data from our camera trap programs, we developed the following three steps to increase the speed and accuracy of our efforts.

(1a) The user visits a camera trap, removes the memory card, inserts the card into a computer, and moves the files to a temporary location folder, e.g., Location01Temp, which holds the images taken by the camera trap at Location01.

(1b) For film cameras, the film is developed and scanned, and the images are saved in an analogous temporary folder.

(2a) The freeware program ReNamer (Kozlov 2009) is used to rename all files in folder Location01Temp simultaneously with the date and time each picture was taken (a scripted sketch of this step appears after these procedures). This step solves two problems. First, all files are uniquely labeled by date and time (multiple images from the same location cannot be recorded at precisely the same time). Second, when files are removed from the memory card, the camera's numbering sequence is often reset to zero, so subsequent downloads could overwrite previously downloaded files; renaming images by date and time prevents such a disaster.

(2b) Film-derived images must be labeled by hand with the date and time; subsequent steps can then be followed.

(3) For each date-time image file in folder Location01Temp, the user identifies the species and counts the number of individuals recorded. The image is then placed in the permanent Location01/Species/Number-of-individuals folder. For instance, if the image at Location01 showed two pumas (Felis concolor), the image file is placed in folder Location01/Puma/02. If multiple species were recorded, copies are stored in their respective species folders within the Location01 folder. Once the images are stored, they need never be examined again.

The above three steps are repeated for each camera trap location. The result is a collection of location folders, each containing folders labeled by species recorded, with each image ultimately stored in a folder labeled by the number of individuals recorded in the image. Finally, a single DOS command produces a systematic, hierarchical text-file listing of all folder and file names in the location folders. The resulting text file contains all the information necessary to complete a full analysis of the camera trap data.
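The renaming in step 2a can also be scripted. The following Python sketch is an illustration under stated assumptions, not the authors' ReNamer-based workflow: it reads each image's EXIF date-time stamp and renames the file accordingly, assuming digital JPEG images that carry EXIF metadata and using the example folder Location01Temp from above.

```python
# Minimal sketch of step 2a: rename each image to its capture date and time.
# Assumptions: JPEG files with EXIF metadata; the Pillow package is installed;
# the folder name "Location01Temp" follows the example in the text.
import os
from PIL import Image

def rename_by_datetime(temp_folder):
    for name in sorted(os.listdir(temp_folder)):
        if not name.lower().endswith((".jpg", ".jpeg")):
            continue
        path = os.path.join(temp_folder, name)
        with Image.open(path) as img:
            # EXIF tag 306 (DateTime) holds a string such as "2010:06:15 23:41:07".
            # Some cameras record the capture time only in DateTimeOriginal;
            # this sketch ignores that case.
            stamp = img.getexif().get(306)
        if not stamp:
            continue  # leave unstamped files for hand labeling, as with film images
        new_name = stamp.replace(":", "-").replace(" ", "_") + ".jpg"
        os.rename(path, os.path.join(temp_folder, new_name))

rename_by_datetime("Location01Temp")
```

Because two images from the same camera trap cannot be recorded at precisely the same time, the resulting date-time names are unique within a location, and repeated downloads cannot overwrite earlier files.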
For digital images, no data are ever entered by hand, thus reducing data entry errors.

Data analysis is a two-step process using two computer programs we wrote. The first program uses the hierarchical text file to assemble a second text file, each line of which has the form:

Location Species Year Month Day Hour Minute Second Number-of-Individuals

This file can be imported directly into a spreadsheet for permanent archiving. Also produced is a third text file listing the locations and species used in the analysis (e.g., birds and mammals are often analyzed separately). The user edits this file, which includes dummy variables (placeholders) for each location's start and stop dates and UTM coordinates. The second program uses these two files to produce a full analysis of the camera trap imagery. The output is a final text file containing all results, which can be imported into a spreadsheet program. The output file also provides text that can be input directly to other analysis programs such as PRESENCE, which estimates the proportion of locations occupied by a species (MacKenzie et al. 2005).

We are aware of the enormous volume of camera trap data that already exists. For camera trap data already stored in text or spreadsheet format, the required text files can be created in their proper formats by other means; for instance, built-in spreadsheet functions allow complex manipulation of stored data and can be used to produce the format above, which is then saved as a text file.

Following our procedures, raw images collected by camera trap programs undertaken in Suriname were relabeled with date and time using ReNamer. All image files were examined, species were identified, and the numbers of individuals were counted. Images were stored in location/species/number-of-individuals folders. From this collection of images, our software assembled the data and produced the two text files, which were then input to our analysis program. A third camera trap program, undertaken in south-central Arizona over a ten-year period, produced ~37,000 images, many of them film images; following our procedure, these images were processed and stored similarly.

The analysis program computed 18 parameters (Table 1) from each of the above data sets, in under 5 seconds apiece on a standard laptop computer. A species accumulation curve (Fig. 1; Rosenzweig 1995) and two species' activity patterns (Fig. 2) are illustrated. These 18 parameters are examples; many other outputs can be calculated, such as diversity indices (e.g., Simpson's or Shannon) or rank–abundance diagrams, because such outputs rely on patterns of abundance and species richness, both quantifiable from remote camera data (Hill 1973, Tokeshi 1993). Such abstractions can be useful for summarizing the complex structure of biological communities, especially for comparative purposes.

Fig. 1. Species accumulation curve showing the number of days required to add the first record of each species (Rosenzweig 1995). Data from Suriname, South America (solid squares) and southern Arizona (open circles).

Fig. 2. Species activity patterns comparing a nocturnal ringtail (Bassariscus astutus; solid bars) with a diurnal squirrel (Sciurus arizonensis; open bars). Data from southern Arizona.
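To make the folder-to-records step concrete, the sketch below is again an illustration rather than the authors' program (which parses the DOS-generated folder listing; the complete programs and sample files are in the supplement). It walks a hypothetical root folder containing Location/Species/Number-of-individuals subfolders and writes one line per image in the format given above. The root folder name, the output file name, and the date-time file-name pattern are assumptions carried over from the previous sketch.

```python
# Minimal sketch: derive per-image records from the Location/Species/Number
# folder hierarchy described in the text. The root folder "CameraTraps", the
# output file name, and the date-time file-name pattern are illustrative assumptions.
import os
import re

STAMP = re.compile(r"(\d{4})-(\d{2})-(\d{2})[_ ](\d{2})-(\d{2})-(\d{2})")

def subfolders(path):
    # Immediate subfolders of path, sorted by name.
    return sorted(d for d in os.listdir(path) if os.path.isdir(os.path.join(path, d)))

def assemble_records(root):
    records = []
    for location in subfolders(root):
        for species in subfolders(os.path.join(root, location)):
            for count in subfolders(os.path.join(root, location, species)):
                folder = os.path.join(root, location, species, count)
                for image in sorted(os.listdir(folder)):
                    match = STAMP.search(image)
                    if match:  # ignore files whose names carry no date-time stamp
                        records.append(" ".join([location, species, *match.groups(), count]))
    return records

# One record per image:
# Location Species Year Month Day Hour Minute Second Number-of-Individuals
with open("camera_trap_records.txt", "w") as out:
    out.write("\n".join(assemble_records("CameraTraps")) + "\n")
```

A record file produced this way can be pasted into a spreadsheet for archiving or concatenated with files from other camera trap programs, which is the basis of the meta-analysis described next.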
Meta-analysis capability was demonstrated by concatenating the above text files into a single input file. Without further editing, the combined data set was analyzed, illustrating that independent data sets collected by multiple camera-trapping programs can be analyzed seamlessly.

Many agencies and individuals collect camera trap data to investigate theoretical and applied ecological issues, to answer inventory and monitoring questions, or to catalog biodiversity. Unfortunately, the number of sites sampled exceeds the number of published results synthesized from the imagery collected. We have argued that several reasons account for this deficiency: attending to large numbers of images is unwieldy and time consuming; hand data entry is error prone and tedious; no standardized protocols exist for retrieving, storing, analyzing, and sharing data; and the inability to handle current volumes of data means fewer traps are set. Users need ways to handle volumes of imagery systematically so as to promote data sharing and meta-analyses. Presently, the lack of automated tools to address these issues severely handicaps the widespread dissemination and use of camera trap data in support of ecological research, monitoring, and conservation programs.

Our methodology remedies these deficiencies by minimizing errors while providing simple, software-assisted tools that standardize image labeling and storage, quicken analyses, and ease the sharing of information gained from camera-trapping programs. Indeed, our three-step procedure analyzes hundreds of thousands of images in seconds, and for digital images it removes the burden of hand data entry, which often limits the production of published results from these programs (Maydanchik 2007). By facilitating data exchange, our method makes a valuable contribution toward addressing ecological questions and conservation issues that often transcend a given project and location. Meta-analyses founded on aggregations of camera trap data can now more freely provide insightful information on segments of biodiversity, offer better responses to pressing research and management questions, and advance the capabilities of inventory and monitoring networks (Chaudhary et al. 2010).

The authors thank Roland Kays for providing details of his camera trap data analysis methodology and for suggestions that improved our procedures and methodology. We also thank Aaron Alvidrez, John Arnett, Chris Carbone, Jon Erz, Rachel Freund, Lisa Haynes, Jamie McCallum, Emil McCain, Lindsay Smythe, Kyle Thompson, and Mara Weisenberger for providing camera trap data and helpful comments. The findings and conclusions in this article are those of the author(s) and do not necessarily represent the views of the U.S. Fish and Wildlife Service.

Programs, user manual, and sample input and output files are available in the supplement (Ecological Archives B091-002-S1).
