Abstract

Saliency prediction models aim to mimic the human visual system’s attention process, and research in this area has made significant progress due to recent advances in deep convolutional neural networks. However, high memory requirements and intensive computational demands make these approaches less suitable for Internet-of-Things (IoT) devices; improved computational efficiency and a reduced memory footprint are needed to facilitate distributed IoT intelligence. This article proposes a pseudoknowledge distillation (PKD) training method for creating a compact real-time saliency prediction model. The proposed method effectively transfers knowledge from the computationally expensive once-for-all network (OFA-595) as a single teacher model, and from a combination of OFA-595 and EfficientNet-B7 as a multiteacher model, to an early-exit evolutionary algorithm network student model by combining knowledge distillation and pseudolabeling. Experiments on five saliency benchmark datasets demonstrate PKD’s improved prediction performance and reduced inference time, achieved without modifying the original student model.
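
To make the training idea concrete, below is a minimal PyTorch sketch of distillation with teacher-generated pseudolabels for saliency prediction. The KL-based saliency loss, the equal averaging of teacher outputs, the loss weighting `alpha`, and the helper names (`saliency_kl`, `pkd_step`) are illustrative assumptions, not the paper’s exact formulation; the teacher and student models are assumed to be modules that map images to single-channel saliency maps.

```python
import torch
import torch.nn.functional as F

def saliency_kl(pred, target, eps=1e-8):
    # KL divergence between two saliency maps, each first normalized
    # to a spatial probability distribution over pixels.
    p = pred.flatten(1)
    q = target.flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)
    q = q / (q.sum(dim=1, keepdim=True) + eps)
    return (q * ((q + eps) / (p + eps)).log()).sum(dim=1).mean()

def pkd_step(student, teachers, images, gt_maps, optimizer, alpha=0.5):
    # One training step: blend ground-truth supervision with
    # pseudolabels averaged over one or more frozen teacher models.
    # With a single teacher (e.g., OFA-595), `teachers` holds one model;
    # the multiteacher case simply adds more (e.g., EfficientNet-B7).
    with torch.no_grad():
        pseudo = torch.stack([t(images) for t in teachers]).mean(dim=0)
    pred = student(images)
    loss = alpha * saliency_kl(pred, gt_maps) \
         + (1 - alpha) * saliency_kl(pred, pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the teachers are used only in a frozen forward pass to produce pseudolabels, they add no cost at inference time: once training ends, only the compact student model is deployed, which is what yields the reduced memory footprint and inference time on IoT hardware.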
