Abstract
The emergence of IoT and advanced multimedia information systems has created a proliferation of video sensor data. Although diverse machine learning approaches are used to extract useful insights from these data, limitations arise when processing large volumes of video that are unlabeled and have previously unseen structure. This highlights the importance of self-structuring intelligence that can adapt to the nature of the data and learn from multi-modal, spatiotemporal, and unstructured inputs. Building on these advances, we propose a recurrent self-structuring machine learning approach for video processing based on a multi-stream hierarchical recurrent growing self-organizing map (RGSOM) architecture. We designed, implemented, and evaluated this approach on a human activity recognition video dataset (the Weizmann dataset), achieving state-of-the-art accuracy of 93.5% in the unsupervised domain. Spatial and temporal features extracted from the video were fed as separate input streams, and RGSOMs were used to self-structure the multi-stream data for visual exploratory analysis and video classification. This study contributes to the existing literature by advancing self-adaptation techniques for video sensor data processing.
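The RGSOM architecture described above extends the classic Kohonen self-organizing map, whose core learning rule pulls the best-matching unit (BMU) and its grid neighbours toward each input. The sketch below is a generic SOM, not the authors' RGSOM (it omits the growing and recurrent components); the grid shape, decay schedule, and parameter names are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_shape=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Kohonen SOM: each sample pulls its best-matching unit (BMU)
    and that unit's grid neighbours toward itself. Illustrative only; the
    paper's RGSOM additionally grows the map and adds recurrent connections."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    weights = rng.random((rows * cols, data.shape[1]))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3   # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))         # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
    return weights

# Toy usage: two well-separated clusters should settle on different units.
data = np.vstack([np.zeros((10, 3)), np.ones((10, 3))])
w = train_som(data)
bmu_zero = int(np.argmin(np.linalg.norm(w - data[0], axis=1)))
bmu_one = int(np.argmin(np.linalg.norm(w - data[-1], axis=1)))
```

In the multi-stream setting the paper describes, one such map per feature stream (spatial, temporal) would be trained, with a higher-level map organizing the per-stream responses.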