In pattern recognition applications, the classification power of a system can be improved by combining several classifiers. Performance cannot improve, however, if the individual classifiers all make the same mistakes, so it is important that they use different features and different structures. In this context, we propose a two-subnet neural network called CSM net. The first subnet, or similarity layer, operates as a similarity-measure neural network based on the complementary similarity measure (CSM) method. The second subnet is a competitive neural network (CNN) based on the winner-takes-all (WTA) algorithm, and it performs the classification. In the proposed architecture, the statistical CSM method is analyzed and implemented in the form of a feed-forward neural network, named the similarity measure neural network (SMNN). We show that the resulting SMNN synaptic weights are modified versions of the model patterns used in the training set, so the SMNN can be regarded as a memory network. We introduce a relative distance computed from the SMNN output and use it as a quality measure for degraded characters, which makes the SMNN classifier powerful and well suited to rejection. The SMNN compares this relative distance to a first rejection threshold to accept or reject incoming characters. To guarantee higher recognition and reliability rates for the cascaded method, the SMNN is combined with the second, WTA-based subnet, which applies a second, specific rejection threshold. Combining these two subnets (CSM net) boosts the performance of the SMNN classifier, yielding a robust multiple-classifier system whose overall rejection threshold can be tuned.
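The pipeline above can be illustrated with a minimal sketch: a CSM score between a binary input and each class template, followed by a WTA decision with a relative-distance rejection rule. The CSM normalization shown is one commonly cited form, and the relative-distance definition and threshold value are assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

def csm(f, t):
    """Complementary similarity measure between binary vectors f and t.
    One common form; the paper's exact normalization may differ."""
    f, t = f.astype(float), t.astype(float)
    a = np.sum(f * t)              # black in both
    b = np.sum(f * (1 - t))        # black in input only
    c = np.sum((1 - f) * t)        # black in template only
    d = np.sum((1 - f) * (1 - t))  # white in both
    denom = np.sqrt((a + c) * (b + d))
    return (a * d - b * c) / denom if denom > 0 else 0.0

def classify(f, templates, reject_threshold=0.2):
    """SMNN + WTA sketch: score f against every class template, take the
    winner, and reject when the relative distance between the two best
    scores falls below the threshold (illustrative definition)."""
    scores = np.array([csm(f, t) for t in templates])
    order = np.argsort(scores)[::-1]           # best class first
    best, second = scores[order[0]], scores[order[1]]
    rel = (best - second) / abs(best) if best != 0 else 0.0
    if rel < reject_threshold:
        return None, scores                    # rejected as ambiguous
    return int(order[0]), scores               # winner-takes-all class
```

A clean input that matches a template wins by a wide relative margin and is accepted; a degraded input whose top two scores are close is rejected rather than misclassified, which is the behavior the cascaded thresholds exploit.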
We report experimental results for the proposed method on characters corrupted with various levels of impulse noise, on broken and manually corrupted characters, and on characters with various levels of additive Gaussian noise. The experiments demonstrate the model's ability to deliver relevant and robust recognition on poor-quality printed checks, and show that CSM net outperforms previous work in both efficiency and accuracy.