Abstract

With the introduction of various advanced deep learning algorithms, image classification systems have transitioned from traditional machine learning algorithms (e.g., SVMs) to Convolutional Neural Networks (CNNs) built with deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For this purpose, Wireless Image Sensor Networks (WISNs), which monitor natural environmental phenomena using tiny, low-power cameras on resource-limited embedded devices, are an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that directly impacts network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism based on image resizing and color quantization that significantly reduces the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well classified by a CNN model trained in advance on adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.

Highlights

  • For many years, various wireless sensor networks (WSNs) have been deployed to digitalize and understand many physical aspects of the real world

  • By reducing the number of per-pixel bits from eight to five (i.e., 32 colors), we can significantly reduce the image size, which in turn reduces the energy used for image transmissions

  • We propose an energy-efficient image processing mechanism combining image scaling and color quantization techniques that significantly reduces the amount of data transferred from resource-limited nodes, while maintaining the classification accuracy of a Convolutional Neural Network (CNN) model at the backend server
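The bit-reduction step named in the highlights can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes uniform quantization by simply dropping the least-significant bits of each 8-bit channel, so 5 remaining bits give 32 levels.

```python
import numpy as np

def quantize_channels(img: np.ndarray, bits: int = 5) -> np.ndarray:
    """Uniformly quantize each 8-bit channel to `bits` bits by
    zeroing the (8 - bits) least-significant bits."""
    shift = 8 - bits
    return (img >> shift) << shift  # keep only the top `bits` bits

# A single RGB pixel as a stand-in for a captured image
rgb = np.array([[[200, 100, 37]]], dtype=np.uint8)
print(quantize_channels(rgb, bits=5))  # [[[200  96  32]]]
```

Because only 2^5 = 32 distinct values remain per channel, the quantized image compresses far better than the original, at the cost of coarser color gradations.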


Summary

Introduction

Various wireless sensor networks (WSNs) have been deployed to digitalize and understand many physical aspects of the real world. Learning algorithms are still very complex and challenging to apply directly in WSNs due to their computational overhead and energy constraints. Instead, these data analytics tools can run on the backend server if the sensor nodes can collect enough data for the model training process. We show that compressed wireless sensor network images can maintain the context detection accuracy of a CNN at ∼98% while reducing network transmission overhead by ∼71%. We design a lossy image compression scheme with low computational overhead for embedded devices in wireless image sensor networks, significantly reducing the battery consumption of WISN nodes while maintaining the classification accuracy of the CNN at the backend server.
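The node-side pipeline described above, combining resolution reduction with color quantization, can be sketched roughly as below. This is a hedged illustration under assumed parameters (a 2x downscale by striding and 5-bit quantization), not the paper's exact filters or settings.

```python
import numpy as np

def compress_for_transmission(img: np.ndarray, scale: int = 2, bits: int = 5) -> np.ndarray:
    """Illustrative pre-processing sketch: downscale, then quantize channels."""
    small = img[::scale, ::scale]     # naive nearest-neighbour downscale
    shift = 8 - bits
    return (small >> shift) << shift  # keep the top `bits` bits per channel

img = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for a captured frame
out = compress_for_transmission(img)
print(out.shape)  # (2, 2, 3)

# Rough data fraction under these assumed settings, if pixels are bit-packed:
# (1/scale^2) of the pixels remain, each carrying bits/8 of the original bits.
fraction = (1 / 2**2) * (5 / 8)
print(fraction)  # 0.15625
```

The actual reduction reported in the paper (∼71%) depends on its chosen scale factor, bit depth, and encoding, which this sketch does not reproduce.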

Scenario and system design
Decreasing resolution
Color quantization
Image resize
Computational cost and energy usage
Image classification
Convolutional neural networks for image classification
Filtering corrupted images
Building and evaluating the CNN model
Related work
Findings
Conclusion