This paper presents a novel approach to quantifying the output sensitivity of Binary Feedforward Neural Networks to weight and input perturbations. First, analytical formulae for computing a neuron's sensitivity are derived by means of matrix and probability theories. Then, based on the neurons' sensitivities and the network's architecture, a bottom-up strategy is followed to compute the sensitivity of the entire network. Compared with existing approaches, the proposed one offers greater generality, lower computational complexity, and higher accuracy. Experimental results verify the correctness and effectiveness of the approach.
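As a point of reference for what "output sensitivity to weight and input perturbations" measures, the sketch below estimates it empirically by Monte Carlo simulation on a small binary feedforward network. This is not the paper's analytical method; the network sizes, the sign-flip perturbation model, and all function names are assumptions made purely for illustration.

```python
# Illustrative Monte Carlo estimate of a BFNN's output sensitivity (assumed setup,
# not the paper's analytical derivation).
import numpy as np

rng = np.random.default_rng(0)

def bfnn_forward(x, weights):
    """Forward pass of a BFNN with bipolar {-1, +1} activations (sign function)."""
    a = x
    for W in weights:
        a = np.sign(W @ a)
        a[a == 0] = 1  # break ties consistently
    return a

def estimate_sensitivity(weights, n_inputs, flip_prob, n_samples=10000):
    """Estimate the probability that the network output changes when each
    weight and input element independently flips sign with `flip_prob`."""
    flips = 0
    for _ in range(n_samples):
        x = rng.choice([-1.0, 1.0], size=n_inputs)
        y = bfnn_forward(x, weights)
        # Perturb inputs and weights by independent random sign flips.
        x_p = x * rng.choice([1.0, -1.0], size=x.shape, p=[1 - flip_prob, flip_prob])
        w_p = [W * rng.choice([1.0, -1.0], size=W.shape, p=[1 - flip_prob, flip_prob])
               for W in weights]
        y_p = bfnn_forward(x_p, w_p)
        flips += int(np.any(y != y_p))
    return flips / n_samples

# Example: a 4-3-1 BFNN with random bipolar weights.
layer_sizes = [4, 3, 1]
weights = [rng.choice([-1.0, 1.0], size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
print(estimate_sensitivity(weights, n_inputs=4, flip_prob=0.05))
```

Such simulation-based estimates require many forward passes per configuration; the paper's analytical, bottom-up computation is intended to avoid that cost while remaining accurate.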