Abstract

Malaria is a severe public health problem worldwide, with some developing countries being most affected. Reliable remote diagnosis of malaria infection will benefit from efficient compression of high-resolution microscopic images. This paper addresses lossless compression of malaria-infected red blood cell images using deep learning. Specifically, we investigate a practical approach in which images are first classified and then compressed using stacked autoencoders. We provide a probabilistic analysis of the impact of misclassification rates on compression performance in terms of the information-theoretic measure of entropy. We then use malaria infection image datasets to evaluate the relation between misclassification rates and the compressed bit rates actually obtained with Golomb–Rice codes. Simulation results show that the joint pattern classification/compression method provides more efficient compression than several mainstream lossless compression techniques, such as JPEG2000, JPEG-LS, CALIC, and WebP, by exploiting common features extracted by deep learning on large datasets. This study provides new insight into the interplay between classification accuracy and compression bit rates. The proposed compression method can find use in telemedicine applications where efficient storage and rapid transfer of large image datasets are desirable.
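
To make the coding stage concrete, the following minimal sketch (not the paper's implementation) illustrates plain Golomb–Rice coding of prediction residues: signed residues are folded to non-negative integers and each value is split into a unary-coded quotient and a k-bit remainder. The zigzag mapping, the single fixed Rice parameter k, and all function names are illustrative assumptions.

```python
# Minimal sketch of Golomb-Rice coding for prediction residues.
# Assumptions (not from the paper): signed residues are folded to
# non-negative integers with a zigzag map, and one fixed Rice
# parameter k is used for the whole residue stream.

def zigzag(residue: int) -> int:
    """Map a signed residue to a non-negative integer: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return (residue << 1) if residue >= 0 else (-residue << 1) - 1

def rice_encode(value: int, k: int) -> str:
    """Encode one non-negative integer as a unary quotient, a stop bit, and k remainder bits."""
    quotient, remainder = value >> k, value & ((1 << k) - 1)
    return "1" * quotient + "0" + format(remainder, f"0{k}b")

def encode_residues(residues, k: int = 2) -> str:
    """Concatenate the Rice codewords of all residues into one bitstring."""
    return "".join(rice_encode(zigzag(r), k) for r in residues)

if __name__ == "__main__":
    # Example: small residues (typical after a good predictor) yield short codewords.
    residues = [0, -1, 2, 0, 3, -2, 1, 0]
    bitstream = encode_residues(residues, k=1)
    print(bitstream, "-", len(bitstream), "bits for", len(residues), "residues")
```

In the setting studied here, such residues would be produced by a class-specific stacked-autoencoder predictor; the more accurately the classifier routes an image to its matching model, the smaller the residues and the shorter the resulting codewords.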

Highlights

  • Malaria occurs in nearly 100 countries worldwide, imposing a huge toll on human health and heavy socioeconomic burdens on developing countries [1]

  • We study how the performance of lossless compression on red blood cell images is affected by an imperfect classifier in a realistic setting where images are first classified prior to being compressed using deep learning methods based on stacked autoencoders

  • We provide an in-depth analysis of the impact of misclassification rates on the overall image compression performance and derive formulas for both empirical entropy and average codeword lengths based on Golomb–Rice codes for residues (the standard quantities involved are recalled in the sketch after this list)
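
As background for the last highlight, the block below recalls the standard quantities involved rather than the paper's exact derivation: the empirical entropy of the residue distribution, the length of a Golomb–Rice codeword with parameter k, and a simple mixture expression for the expected bit rate under an imperfect classifier. The symbols P(i), P(j | i), and R_ij (class prior, confusion probability, and per-pixel rate when class-i images are coded with the class-j model) are assumed notation.

```latex
% Empirical entropy of the residue distribution (bits per symbol)
H(\hat{p}) \;=\; -\sum_{x} \hat{p}(x)\,\log_2 \hat{p}(x)

% Codeword length of a Golomb--Rice code with parameter k for a
% non-negative (folded) residue r: unary quotient, stop bit, k remainder bits
\ell_k(r) \;=\; \left\lfloor r / 2^{k} \right\rfloor + 1 + k

% Expected bit rate with an imperfect classifier (assumed notation):
% class prior P(i), confusion probability P(j|i), and rate R_{ij} when
% class-i images are compressed with the model trained for class j
\bar{R} \;=\; \sum_{i}\sum_{j} P(i)\,P(j \mid i)\,R_{ij}
```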


Introduction

Malaria occurs in nearly 100 countries worldwide, imposing a huge toll on human health and heavy socioeconomic burdens on developing countries [1]. The agents of malaria are mosquito-transmitted Plasmodium parasites. Microscopy is the gold standard for diagnosis, but manual blood smear evaluation is a time-consuming, error-prone, and repetitive process that requires skilled personnel [2]. Many studies have therefore pursued automated Plasmodium characterization and classification from digitized blood smear images [3,4,5,6,7]. Traditional algorithms labeled images using manually designed feature extraction, with drawbacks in both time-to-solution and accuracy [4]. Leveraging high-performance computing, deep machine learning algorithms could potentially drive true artificial intelligence in malaria research. The convergence of mobile computing, the Internet, and biomedical instrumentation allows the worldwide transfer of biomedical images for telemedicine applications, making consultation or screening by specialists in geographically distant locations possible.

