Abstract
In recent years, Artificial Intelligence (AI) has played an increasingly important role in daily life. Convolutional neural networks (CNNs) in particular have received growing attention in medical image analysis. Running a CNN on a portable medical device (e.g., an edge device) can provide convenient yet accurate disease diagnosis. However, CNNs require substantial computing resources, which limits their use on edge devices. Reducing the size of CNN models while maintaining high accuracy makes it easier to integrate them into edge devices for real-time use. This paper investigates and compares two CNN quantization strategies for reducing resource requirements. The first, a two-stage strategy, is based on post-training quantization: the CNN is first trained conventionally and then quantized to obtain a lightweight version. The second, a one-stage strategy, performs quantization directly during model training (quantization-aware training). On the one hand, we discuss the main advantages and drawbacks of each strategy. On the other hand, we implement a state-of-the-art CNN model (i.e., MobileNet-V2) and quantitatively compare the original model with the two lightweight models produced by post-training quantization and quantization-aware training in terms of accuracy and memory requirements. Experiments, conducted on a dataset of endoscopic images for the task of anatomical landmark classification, show the potential of deploying such techniques on edge devices.
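The abstract does not specify the software stack used. As a minimal sketch only, assuming a TensorFlow/Keras workflow with the TensorFlow Model Optimization toolkit and an ImageNet-pretrained MobileNet-V2 as a stand-in for the endoscopic-image classifier, the two strategies differ roughly as follows:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stand-in model; the paper fine-tunes MobileNet-V2 on endoscopic images.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Strategy 1 (two-stage): post-training quantization.
# The model is trained conventionally first, then converted with
# quantization applied after the fact.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_ptq = converter.convert()
with open("mobilenet_v2_ptq.tflite", "wb") as f:
    f.write(tflite_ptq)

# Strategy 2 (one-stage): quantization-aware training.
# Fake-quantization nodes are inserted so the model learns weights that
# tolerate reduced precision during training/fine-tuning.
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# qat_model.fit(train_images, train_labels, ...)  # placeholder training data
```

This is an illustrative sketch of the two quantization strategies, not the authors' implementation; dataset, hyperparameters, and conversion settings are assumptions.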