Abstract
With the rapid development of the mobile Internet of Things (IoT) and mobile sensing devices, a large number of mobile computing-oriented applications have attracted attention from both industry and academia. Deep learning based methods have achieved great success in artificial intelligence (AI) oriented applications. To advance the development of AI-based IoT systems, effective and efficient algorithms are urgently needed for IoT edge computing. Time-series data classification is an ongoing problem in applications for mobile devices (e.g., music genre classification on mobile phones). Traditional methods, however, require field expertise to extract handcrafted features from the time-series data. Deep learning has been demonstrated to be effective and efficient on this kind of data. Nevertheless, existing works neglect some of the sequential relationships in time-series data, which are significant for time-series classification. Considering these limitations, we propose a hybrid architecture, named the parallel recurrent convolutional neural network (PRCNN). The PRCNN is an end-to-end network that combines feature extraction and time-series classification in one stage. Its parallel CNN and Bi-RNN blocks extract the spatial features and the temporal frame orders, respectively, and the outputs of the two blocks are fused into one powerful representation of the time-series data. The fused vector is then fed into a softmax function for classification. The parallel network structure guarantees that the extracted features are robust enough to represent the time-series data. Moreover, the experimental results demonstrate that our proposed architecture outperforms previous approaches on the same datasets. Taking music data as an example, we also conduct contrastive experiments to verify that the additional parallel Bi-RNN block improves time-series classification performance compared with using CNNs alone.
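As a concrete illustration of the parallel design described above, the following is a minimal PyTorch sketch of such an architecture. The layer sizes, the two-convolution CNN depth, and the 128-mel-bin spectrogram input are illustrative assumptions, not the paper's exact configuration.

    # Minimal PRCNN-style sketch (PyTorch). All sizes are illustrative.
    import torch
    import torch.nn as nn

    class PRCNN(nn.Module):
        def __init__(self, n_classes=10, n_mels=128):
            super().__init__()
            # CNN block: extracts spatial features from the spectrogram "image".
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
            )
            # Parallel Bi-RNN block: a GRU over the time frames in both
            # directions, so past and future context both contribute.
            self.birnn = nn.GRU(input_size=n_mels, hidden_size=64,
                                batch_first=True, bidirectional=True)
            # Fused representation: 32 (CNN) + 2*64 (Bi-GRU) -> class scores.
            self.classifier = nn.Linear(32 + 128, n_classes)

        def forward(self, spec):  # spec: (batch, 1, n_mels, n_frames)
            spatial = self.cnn(spec).flatten(1)            # (batch, 32)
            seq = spec.squeeze(1).transpose(1, 2)          # (batch, n_frames, n_mels)
            _, h = self.birnn(seq)                         # h: (2, batch, 64)
            temporal = torch.cat([h[0], h[1]], dim=1)      # (batch, 128)
            fused = torch.cat([spatial, temporal], dim=1)  # one fused representation
            return self.classifier(fused)  # softmax applied via CrossEntropyLoss

    model = PRCNN()
    logits = model(torch.randn(4, 1, 128, 256))  # 4 clips, 128 mel bins, 256 frames

Note that the fusion here is a simple concatenation of the two blocks' outputs before the final classifier, matching the abstract's description of merging spatial and temporal features into a single vector.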
Highlights
With the extensive utilization of various mobile devices, mobile computing has attracted attention from both industry and academia [1], [2]
The design of the bidirectional recurrent neural network (Bi-RNN) [41] block is motivated by two main considerations: 1) an RNN with gated recurrent units (GRUs) is used to extract temporal features that are lost in convolutional neural networks (CNNs), and 2) the past and future information in the whole sequence is fully exploited to extract more representative features (see the sketch after this list)
This end-to-end learning architecture consists of parallel CNN and bidirectional recurrent neural network (Bi-RNN) blocks for feature extraction
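To illustrate the second design consideration, the short PyTorch sketch below shows how a bidirectional GRU reads the frame sequence in both directions, so each timestep's output concatenates a forward (past-aware) and a backward (future-aware) hidden state. The sizes are illustrative assumptions.

    # Illustrative Bi-GRU over spectrogram frames (sizes are assumptions).
    import torch
    import torch.nn as nn

    gru = nn.GRU(input_size=128, hidden_size=64,
                 batch_first=True, bidirectional=True)
    frames = torch.randn(4, 256, 128)  # (batch, n_frames, features per frame)
    out, _ = gru(frames)
    print(out.shape)  # torch.Size([4, 256, 128]): 64 forward + 64 backward states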
Summary
With the extensive utilization of various mobile devices, mobile computing has attracted attention from both industry and academia [1], [2]. Two crucial components, feature extraction and classifier learning, may greatly affect music genre classification performance. Existing music genre classification methods that use CNNs cannot model the long-term temporal information in music spectrograms. We propose a hybrid model that combines the spatial features and temporal frame orders of the music samples, consisting of a CNN block and a parallel bidirectional recurrent neural network (Bi-RNN) block. The frequency-domain information is treated as images by the CNN-based deep neural network, and this hybrid structure is more suitable for music genre classification than simple CNNs.
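The following sketch shows the preprocessing step implied above: converting an audio clip to a log-mel spectrogram, which the CNN block then treats as a single-channel image. The file name, mel-bin count, and FFT parameters are illustrative assumptions.

    # Illustrative spectrogram extraction (librosa); parameters are assumptions.
    import librosa
    import numpy as np

    y, sr = librosa.load("clip.wav", sr=22050)      # hypothetical input file
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                         hop_length=512, n_mels=128)
    log_mel = librosa.power_to_db(mel)              # (128 mel bins, n_frames)
    image = log_mel[np.newaxis, np.newaxis, :, :]   # (1, 1, 128, n_frames) for the CNN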