Abstract

Embedding sentiment-analysis capabilities in smart devices is especially challenging because sentiment analysis relies on deep neural networks, in particular convolutional neural networks. This paper presents a novel hardware-friendly detector of image polarity, enhanced with saliency detection. The approach stems from a hardware-oriented design process that trades off prediction accuracy against computational resources. The eventual solution combines lightweight deep-learning architectures with post-training quantization. Experimental results on standard benchmarks confirm that the design strategy can automatically infer both the salient parts and the polarity of an image with high accuracy. Saliency-based solutions in the literature prove impractical because of their considerable computational cost; the paper shows that the novel design strategy can be deployed successfully on a variety of commercial smartphones, yielding real-time performance.
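
As a rough illustration of the kind of pipeline the abstract describes, the sketch below applies post-training quantization to a lightweight CNN classifier. The MobileNetV2 backbone and the TensorFlow Lite toolchain are illustrative assumptions only; the paper does not specify this exact stack or model.

```python
# A minimal sketch, assuming a MobileNetV2 backbone and the TensorFlow
# Lite toolchain; the paper does not prescribe this exact pipeline.
import tensorflow as tf

# Lightweight backbone with a binary polarity head (positive/negative).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(2, activation="softmax"),  # image polarity
])

# Post-training quantization: weights (and, given a calibration set,
# activations) are mapped to 8-bit integers, reducing model size and
# inference latency on smartphone hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("polarity_quant.tflite", "wb") as f:
    f.write(tflite_model)
```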

Highlights

  • The availability of ever-increasing computational power and the diffusion of distributed computing make it possible to apply deep-learning paradigms to complex problems

  • Deep learning lies at the core of a variety of applications supported by smart devices, but power consumption and hardware constraints tend to limit the deployment of those learning models

  • Sentiment analysis is an interesting yet challenging application of deep learning [1, 2], since it aims to extract the emotional information conveyed by media content

Summary

Introduction

The availability of ever-increasing computational power and the diffusion of distributed computing make it possible to apply deep-learning paradigms to complex problems. Convolutional neural networks (CNNs), for example, are a key tool for image/video processing, but they demand a considerable architecture-design effort and bring about a notable computational cost. This makes the real-time implementation of CNNs on embedded systems a very challenging task. The problem calls for a multidisciplinary approach, involving cognitive models [3], computational resources [4], natural language processing [5, 6], and multimodal analysis [7]. To tackle the so-called subjective perception problem [8], i.e., the fact that different users perceive the same image in different ways, designers often envision custom solutions involving both algorithms and hardware implementations.

