Abstract

In this study, a framework for the network-based representation of time series is presented. In the proposed method, the signal is first segmented in the time domain into fixed-width time windows with 50% overlap. Each segment is normalized to the range defined by the absolute maximum amplitude of the main signal and its negative counterpart, and the normalized signal is quantized to 2^n levels. This transformation proceeds through three channels, each defined by a different jump value, and the channels are stacked in layers to produce a vertical RGB image. Tiling the vertical RGB images from successive time windows horizontally yields a time-graph representation called the VarioGram, in which the horizontal axis represents time and the vertical axis represents signal fluctuations. A ResNet model fed with VarioGram representations of the audio signals in the ESC-10 dataset, which is frequently used in environmental sound classification problems, achieved a classification accuracy of 82.08%, and this rose to 93.33% when the VarioGram representations were hybridized with mel-spectrogram images. The VarioGram representations thus slightly improved the best classification accuracy achievable with the mel-spectrogram alone.
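The abstract outlines the VarioGram pipeline only at a high level, so the following is a minimal Python sketch of one plausible reading of it: segmentation into 50%-overlapping windows, normalization against the signal's absolute maximum amplitude, quantization to 2^n levels, and three channels built from three jump values. The `variogram` function name, the window width, and the interpretation of a jump value as a per-channel sample stride are assumptions, not details given in the abstract.

```python
# A minimal sketch of the VarioGram pipeline described in the abstract.
# Assumptions (not specified in the abstract): a "jump value" is taken to
# be a sample stride within each window, and win / n_bits are illustrative.
import numpy as np

def variogram(signal, win=1024, n_bits=8, jumps=(1, 2, 4)):
    """Build a VarioGram-style RGB image from a 1-D signal.

    signal : 1-D float array
    win    : fixed window width; windows overlap by 50% (hop = win // 2)
    n_bits : quantization depth, giving 2**n_bits levels
    jumps  : three per-channel jump values (interpreted here as strides)
    """
    A = np.max(np.abs(signal))          # absolute maximum amplitude
    hop = win // 2                      # 50% overlap between windows
    levels = 2 ** n_bits
    columns = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        norm = (seg + A) / (2 * A)      # map [-A, A] onto [0, 1]
        q = np.clip((norm * levels).astype(int), 0, levels - 1)
        chans = []
        for d in jumps:
            # one channel per jump value, resampled to a common height
            ch = q[::d]
            ch = np.interp(np.linspace(0, len(ch) - 1, win),
                           np.arange(len(ch)), ch)
            chans.append(ch)
        columns.append(np.stack(chans, axis=-1))   # (win, 3) RGB column
    # tile the vertical columns horizontally: x-axis = time,
    # y-axis = signal fluctuation
    return np.stack(columns, axis=1).astype(np.uint8)
```

Under this reading, each time window contributes one vertical RGB strip, and the horizontal concatenation of strips gives the time-graph image that is then fed to the ResNet classifier.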
