Abstract

We study deep neural networks for the classification of images with quality distortions. Deep network performance on poor-quality images can be greatly improved if the network is fine-tuned with distorted data. However, it is difficult for a single fine-tuned network to perform well across multiple distortion types. We propose a mixture-of-experts-based ensemble method, MixQualNet, that is robust to multiple distortion types. Each "expert" in our model is trained on a particular type of distortion. The output of the model is a weighted sum of the expert outputs, where the weights are determined by a separate gating network. The gating network is trained to predict weights for a particular distortion type and level. During testing, the network is blind to the distortion type and level, yet it can still assign appropriate weights to the expert models. To reduce computational complexity, we introduce weight sharing into MixQualNet. We utilize the TreeNet weight-sharing architecture and also introduce the Inverted TreeNet architecture. While both weight-sharing architectures reduce memory requirements, our proposed Inverted TreeNet also achieves improved accuracy.
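To make the weighted-sum formulation concrete, the following is a minimal PyTorch-style sketch of the gated mixture-of-experts forward pass described above. The `experts` and `gating_net` modules are hypothetical placeholders (one expert per distortion type, plus a gating backbone); this is not the authors' implementation and it omits the TreeNet/Inverted TreeNet weight sharing.

```python
# Illustrative sketch only: combines per-distortion expert classifiers with a
# gating network that weights their outputs. Architectures, training procedure,
# and weight sharing from the paper are not reproduced here.
import torch
import torch.nn as nn

class MixQualNetSketch(nn.Module):
    def __init__(self, experts, gating_net):
        super().__init__()
        # One expert classifier per distortion type (e.g. blur, noise, clean).
        self.experts = nn.ModuleList(experts)
        # Gating network maps the input image to one logit per expert.
        self.gating_net = gating_net

    def forward(self, x):
        # Gating weights sum to 1 across experts; no knowledge of the true
        # distortion type or level is needed at test time.
        gate_logits = self.gating_net(x)                 # (batch, num_experts)
        weights = torch.softmax(gate_logits, dim=1)      # (batch, num_experts)
        # Each expert produces class scores for the same input image.
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, num_experts, num_classes)
        # Final prediction: weighted sum of the expert outputs.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)         # (batch, num_classes)
```

In the paper's formulation, the gating network is trained to predict the weights appropriate to a given distortion type and level, so at test time the mixture adapts to an unknown corruption without being told what it is.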
