Abstract

The Hierarchical Model and X (HMAX), a feedforward network, has achieved highly competitive performance on object recognition tasks compared with other state-of-the-art machine vision algorithms. Nevertheless, the standard HMAX model has two major drawbacks. The first is the computational cost of the S2 layer. The second is the random patch selection of the HMAX model, which degrades performance because meaningless and redundant patches are extracted. In this paper, a faster and more accurate HMAX model combined with the scale-invariant feature transform (SIFT) algorithm is proposed to address these weaknesses. The proposed model comprises two levels of improvement. The first increases the speed of matching in the S2 layer by comparing the extracted patches against only a few informative locations rather than the whole image. The second improves performance by extracting discriminative and distinctive patches in the training stage. The obtained results show that the proposed model performs classification tasks faster than both the standard HMAX model and the binary-based HMAX model (B-HMAX), while its accuracy remains almost as high as that of B-HMAX.
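The first improvement described above can be illustrated with a minimal sketch: instead of evaluating the S2 radial-basis response between each stored patch and every position in the image, the response is evaluated only at a handful of informative keypoint locations (e.g. those returned by a SIFT detector). The function name, the RBF width `sigma`, and the max-pooling over locations are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def s2_responses_at_keypoints(image, patches, locations, sigma=0.5):
    """Sketch of keypoint-restricted S2 matching.

    image:     2-D array (a C1-like feature map in a real HMAX pipeline).
    patches:   array of shape (n_patches, p, p) of stored prototype patches.
    locations: list of (row, col) keypoint centers (assumed to come from a
               SIFT detector in the proposed model).
    Returns one max-pooled RBF similarity per stored patch, evaluated only
    at the given locations rather than at every image position.
    """
    half = patches.shape[1] // 2
    responses = np.full(len(patches), -np.inf)
    for (r, c) in locations:
        crop = image[r - half:r + half + 1, c - half:c + half + 1]
        if crop.shape != patches.shape[1:]:
            continue  # skip keypoints too close to the image border
        # RBF (Gaussian) similarity, as in the standard HMAX S2 layer
        dists = ((patches - crop) ** 2).sum(axis=(1, 2))
        responses = np.maximum(responses, np.exp(-dists / (2 * sigma ** 2)))
    return responses

# Usage with random data standing in for real features and keypoints:
rng = np.random.default_rng(0)
img = rng.random((20, 20))
protos = rng.random((3, 5, 5))
out = s2_responses_at_keypoints(img, protos, [(10, 10), (5, 5)])
```

The cost scales with the number of keypoints instead of the number of image positions, which is the source of the claimed speed-up.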
