Abstract

This paper, working toward the goal of human-level synthetic intelligence, presents a novel approach to learning an associative memory model using the Generalized Hough Transform (GHT) [1]. A human action detection and classification system is also constructed to verify the effectiveness of the proposed GHT-based associative memory model. Existing human action classification systems use machine learning architectures and low-level features to characterize a specific human action. However, these architectures often lack restructuring capability, an important process in forming the conceptual structures of human-level synthetic intelligence. The gap between low-level features and high-level human intelligence also degrades the performance of existing human action recognition algorithms when the spatio-temporal boundaries of action objects are ambiguous. To reduce the effect of temporal ambiguity, the proposed system uses a preprocessing procedure that extracts key-frames from a video sequence, providing a compact representation of the video. The image and motion features of patches extracted from each key-frame are collected and used to train an appearance–motion codebook. The training procedure, based on the learnt codebook and the GHT, constructs a hypergraph for associative memory learning. For each key-frame of a test video clip, a Hough voting framework detects salient segments by grouping blocks with similar appearance and motion; these segments are further partitioned into multiple patches. The features of the detected patches are used to query the associative memory and retrieve the missing patches of each key-frame, recalling the whole action object. These patches are then used to locate the target action object and classify the action type simultaneously via a probabilistic Hough voting scheme. Experimental results show that the proposed method performs well on several publicly available datasets in terms of detection accuracy and recognition rate.
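As a concrete illustration of the voting step summarized above, the Python sketch below shows how a probabilistic Hough voting scheme can jointly estimate an action object's centre and its class from patch-to-codebook matches. This is a minimal sketch under assumed data structures, not the paper's exact formulation: the function name hough_vote, the layout of patches and codebook entries (each entry holding a feature vector, an offset to the object centre observed in training, and a class probability vector), and the Gaussian matching weight are all illustrative, and the hypergraph-based associative memory is omitted.

    import numpy as np

    def hough_vote(patches, codebook, num_classes, frame_shape, sigma=1.0):
        """Accumulate probabilistic Hough votes for the action centre and class.

        patches  : list of (feature_vector, (x, y)) tuples from one key-frame
        codebook : list of (feature_vector, (dx, dy), class_probs) entries,
                   where (dx, dy) is the displacement from the patch to the
                   object centre observed during training
        Returns the estimated object centre and the most likely action class.
        """
        h, w = frame_shape
        # One accumulator per action class over the grid of centre positions.
        acc = np.zeros((num_classes, h, w))
        for feat, (x, y) in patches:
            for cb_feat, (dx, dy), class_probs in codebook:
                # Soft match: codewords closer in feature space vote more strongly.
                weight = np.exp(-np.sum((feat - cb_feat) ** 2) / (2 * sigma ** 2))
                cx, cy = int(x + dx), int(y + dy)
                if 0 <= cx < w and 0 <= cy < h:
                    acc[:, cy, cx] += weight * class_probs
        # The accumulator peak gives the class and centre jointly.
        cls, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
        return (cx, cy), cls

With a codebook learnt offline, a single call on the patches of a key-frame returns a centre estimate and a class label in one pass, mirroring the simultaneous localization and classification described in the abstract.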
