Abstract

In computer vision, deep learning for image processing has become a prominent research area. Semantic segmentation is among the most essential and significant tasks in image-processing research, with a wide range of applications such as autonomous driving, medical diagnosis, and surveillance security. To date, many studies have proposed and developed neural network architectures for this task. To the best of our knowledge, existing neural networks for semantic segmentation have large parameter sizes, making it infeasible to deploy those architectures on low-power, memory-limited embedded platforms such as FPGAs. Deploying such an architecture on an embedded platform becomes possible once the parameter size is reduced without altering the network architecture. Quantization lowers the precision of the neural network parameters while largely preserving accuracy. In this paper, we propose a quantization algorithm for a semantic segmentation deep learning architecture that reduces the parameter size by four to eight times with negligible accuracy loss. With the reduced parameter size, the architecture improves in required storage, computational speed, and power efficiency.
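
The abstract does not describe the paper's specific quantization algorithm, so the following is only a minimal illustrative sketch of the general idea in Python/NumPy: uniform symmetric post-training quantization of one weight tensor to int8, which gives the 4x end of the stated range (a 4-bit scheme would give 8x). The function name quantize_symmetric_int8 and the per-tensor scaling scheme are assumptions for illustration, not the authors' method.

    import numpy as np

    def quantize_symmetric_int8(weights):
        """Uniform symmetric quantization of a float32 weight tensor to int8.

        Returns the int8 tensor and the per-tensor scale needed to
        dequantize: w ~= q * scale. This is a generic illustration,
        not the algorithm proposed in the paper.
        """
        qmax = 127                                # int8 range is [-128, 127]
        scale = np.abs(weights).max() / qmax      # one scale for the whole tensor
        q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
        return q, scale

    # Toy usage: quantize one layer's weights and check storage and round-trip error.
    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_symmetric_int8(w)
    w_restored = q.astype(np.float32) * scale
    print("int8 bytes:", q.nbytes, "float32 bytes:", w.nbytes)   # 4x smaller
    print("max abs round-trip error:", np.abs(w - w_restored).max())

The per-tensor scale keeps the sketch short; practical schemes often use per-channel scales to limit the accuracy loss that a single outlier weight would otherwise cause.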

