Purpose
With the rapid advancement of lifestyle and technology, human lives face growing threats, including accidents, exposure to dangerous substances and animal strikes. Attacks by wild animals, in particular, harm people with increasing frequency, and investigation of reported cases reveals that such events can be detected early. Machine learning and deep learning techniques are applied to address this challenge: an upgraded VGG-16 model with deep learning-based detection suits such real-time applications because it overcomes the low accuracy and poor real-time performance of traditional detection methods and detects medium- and long-distance objects more accurately. Many organizations address physical security concerns with safety and security measures, particularly CCTV/video surveillance systems, which are effective at visually detecting a range of suspicious activities on premises and in the workplace; many have also begun to adopt automated video analytics solutions such as motion detection, object/perimeter detection, face recognition and artificial intelligence/machine learning. Anomaly identification can be performed on the data collected from CCTV cameras, but camera surveillance generates enormous quantities of footage that is laborious and expensive to screen for the species of interest. Many cases have been recorded in which wild animals enter public places, causing havoc, damaging property and costing human lives. Because the conventional approach of sifting through images by eye is expensive and risky, an automated wild animal detection system is required.

Design/methodology/approach
The proposed system consists of a wild animal detection module, a classifier and an alarm module: video frames are fed as input, and prediction results are produced as output. Frames extracted from the videos are pre-processed and delivered to the neural network classifier as filtered frames. The classifier module categorizes each identified animal into one of several categories, and based on the classifier outcome an email or WhatsApp notification is issued to the appropriate authorities or users (a minimal sketch of this pipeline follows the Findings section below).

Findings
Evaluation metrics are used to assess the quality of a statistical or machine learning model, and any such system includes a review of its machine learning models or algorithms. A number of evaluation measures can be used to put a model to the test, among them classification accuracy, logarithmic loss and the confusion matrix. The model must be evaluated using a range of metrics, because a model may perform well under one evaluation metric yet poorly under another; a combination of evaluation metrics is therefore needed to confirm that the model runs correctly and optimally.
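As a toy illustration (not the authors' code), the metrics listed above can be computed with scikit-learn; the label and probability arrays below are made-up values purely to show the calls:

```python
# Toy illustration of the evaluation metrics named in the Findings,
# using scikit-learn; the arrays are made-up values, not paper results.
from sklearn.metrics import accuracy_score, confusion_matrix, log_loss

y_true = [0, 1, 2, 2, 1, 0]        # ground-truth class indices
y_pred = [0, 1, 2, 1, 1, 0]        # predicted class indices
y_prob = [                         # predicted per-class probabilities
    [0.8, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.1, 0.2, 0.7],
    [0.2, 0.5, 0.3],
    [0.2, 0.6, 0.2],
    [0.9, 0.05, 0.05],
]

print("classification accuracy:", accuracy_score(y_true, y_pred))
print("logarithmic loss       :", log_loss(y_true, y_prob))
print("confusion matrix       :\n", confusion_matrix(y_true, y_pred))
```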
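Separately, as referenced in the Design/methodology/approach section, the following is a minimal Python sketch of the frame-to-alert pipeline under stated assumptions: the class list, file paths, SMTP settings and the helper names extract_frames, preprocess, classify and send_alert are illustrative rather than the authors' implementation, and the classifier is stubbed where the trained VGG-16 model would be loaded:

```python
# Minimal sketch of the detection -> classification -> alert pipeline.
# Class names, paths and SMTP settings are illustrative assumptions.
import smtplib
from email.message import EmailMessage

import cv2          # frame extraction and pre-processing
import numpy as np

CLASSES = ["elephant", "leopard", "wild_boar", "no_animal"]  # assumed labels

def extract_frames(video_path: str, every_n: int = 30):
    """Yield every n-th frame of the video as a BGR array."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            yield frame
        idx += 1
    cap.release()

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize to the VGG-16 input size and scale pixels to [0, 1]."""
    frame = cv2.resize(frame, (224, 224))
    return frame.astype("float32") / 255.0

def classify(frame: np.ndarray) -> str:
    """Placeholder for the trained VGG-16-based classifier."""
    # e.g. probs = model.predict(frame[None])[0]  (model loading omitted)
    probs = np.zeros(len(CLASSES))
    probs[-1] = 1.0  # stub: always 'no_animal' until a real model is plugged in
    return CLASSES[int(np.argmax(probs))]

def send_alert(label: str, recipient: str = "authority@example.com") -> None:
    """E-mail the detection result; SMTP host and port are assumptions."""
    msg = EmailMessage()
    msg["Subject"] = f"Wild animal detected: {label}"
    msg["From"] = "camera@example.com"
    msg["To"] = recipient
    msg.set_content(f"The surveillance system classified a frame as '{label}'.")
    with smtplib.SMTP("localhost", 25) as server:
        server.send_message(msg)

if __name__ == "__main__":
    for frame in extract_frames("cctv_feed.mp4"):
        label = classify(preprocess(frame))
        if label != "no_animal":
            send_alert(label)
```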
Originality/value
In the ImageNet VGG-16 of Figure 4, which operates on images of size 224×224×3, the output of conv5_3 is of size 7×7×512. The parameters of fc6, with a flattened input size of 7×7×512 and an output size of 4,096, therefore have shape 4,096 × (7×7×512). Reshaped to dimensions 4,096×7×7×512, they define an equivalent convolutional layer conv6 with a 7×7 kernel and 4,096 output channels. The parameters of fc7, with an input size of 4,096 (i.e. the output size of fc6) and an output size of 4,096, have shape 4,096×4,096; the input can be thought of as a 1×1 image with 4,096 channels, so reshaping the parameters to dimensions 4,096×1×1×4,096 yields an equivalent convolutional layer conv7 with a 1×1 kernel and 4,096 output channels. Conv6 thus has 4,096 filters of dimensions 7×7×512, and conv7 has 4,096 filters of dimensions 1×1×4,096; these filters are numerous, large and computationally expensive. To remedy this, the authors reduce both the number of filters and the size of each filter by subsampling parameters from the converted convolutional layers: conv6 uses 1,024 filters of dimensions 3×3×512, with parameters subsampled from 4,096×7×7×512 to 1,024×3×3×512, and conv7 uses 1,024 filters of dimensions 1×1×1,024, with parameters subsampled from 4,096×1×1×4,096 to 1,024×1×1×1,024.
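The conversion and subsampling described above can be sketched in PyTorch. Assuming the subsampling keeps every 4th filter and every 3rd spatial kernel position (the pattern consistent with the dimensions stated above), and using random tensors as stand-ins for the pretrained fc6/fc7 weights, a hypothetical decimate helper performs the reduction:

```python
import torch

def decimate(tensor: torch.Tensor, m: list) -> torch.Tensor:
    """Keep every m[d]-th slice along each dimension d (None keeps all)."""
    for d in range(tensor.dim()):
        if m[d] is not None:
            idx = torch.arange(0, tensor.size(d), m[d]).long()
            tensor = tensor.index_select(d, idx)
    return tensor

# Stand-ins for the pretrained fc6/fc7 weight matrices (random values here).
fc6_w = torch.randn(4096, 512 * 7 * 7)   # fc6: 25,088 inputs -> 4,096 outputs
fc7_w = torch.randn(4096, 4096)          # fc7:  4,096 inputs -> 4,096 outputs

# fc6 -> conv6: view as (4096, 512, 7, 7), then keep every 4th filter and
# every 3rd spatial position, giving (1024, 512, 3, 3).
conv6_w = decimate(fc6_w.view(4096, 512, 7, 7), m=[4, None, 3, 3])

# fc7 -> conv7: view as (4096, 4096, 1, 1), then keep every 4th output and
# input channel, giving (1024, 1024, 1, 1).
conv7_w = decimate(fc7_w.view(4096, 4096, 1, 1), m=[4, 4, None, None])

print(conv6_w.shape)  # torch.Size([1024, 512, 3, 3])
print(conv7_w.shape)  # torch.Size([1024, 1024, 1, 1])
```

The bias vectors would be subsampled the same way along their single dimension (4,096 to 1,024 entries).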