Many attack paradigms against deep neural networks have been well studied, such as the backdoor attack in the training stage and the adversarial attack in the inference stage. In this article, we study a novel attack paradigm, the bit-flip based weight attack, which directly modifies weight bits of the attacked model in the deployment stage. To cover various attack scenarios, we propose a general formulation that includes terms for achieving the effectiveness and stealthiness goals, together with a constraint on the number of bit-flips. Benefiting from this extensible and flexible formulation, we further present two cases with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA). SSA, which aims to misclassify a specific sample into a target class, is formulated as a binary optimization problem that determines the states (0 or 1) of the attacked bits; TSA, which aims to misclassify any sample embedded with a specific trigger, is formulated as a mixed integer programming (MIP) problem that jointly optimizes the flipped bits and a learnable trigger. Utilizing a recent technique from integer programming, we equivalently reformulate both problems as continuous optimization problems, whose approximate solutions can be obtained effectively and efficiently via the alternating direction method of multipliers (ADMM). Extensive experiments demonstrate the superiority of our methods.
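To illustrate how such a binary program can be handed to ADMM, the sketch below applies the generic l2-box reformulation (in the spirit of lp-box ADMM), which replaces the binary constraint {0,1}^n with the intersection of the box [0,1]^n and the sphere {b : ||b - (1/2)1||^2 = n/4}, to a toy binary quadratic objective. The quadratic objective, the penalty rho, and the iteration budget here are illustrative assumptions for exposition, not the attack loss or hyperparameters used in this article.

```python
# A minimal NumPy sketch of l2-box ADMM on a toy binary quadratic
# program: min_b 0.5 b^T Q b + c^T b, subject to b in {0,1}^n.
# The binary set is rewritten as box [0,1]^n intersected with the
# sphere ||b - 0.5||_2^2 = n/4; Q, c, rho, iters are illustrative.
import numpy as np

def l2_box_admm(Q, c, rho=1.0, iters=200):
    n = c.shape[0]
    half = 0.5 * np.ones(n)
    radius = np.sqrt(n) / 2.0
    x = np.full(n, 0.5)            # relaxed binary variable
    y1 = x.copy()                  # copy constrained to the box
    y2 = x.copy()                  # copy constrained to the sphere
    z1 = np.zeros(n)               # dual variable for x = y1
    z2 = np.zeros(n)               # dual variable for x = y2
    A = Q + 2.0 * rho * np.eye(n)  # x-update matrix (quadratic f)
    for _ in range(iters):
        # x-update: closed-form minimizer of the augmented Lagrangian
        x = np.linalg.solve(A, rho * (y1 + y2) - z1 - z2 - c)
        # y1-update: Euclidean projection onto the box [0,1]^n
        y1 = np.clip(x + z1 / rho, 0.0, 1.0)
        # y2-update: Euclidean projection onto the sphere
        v = x + z2 / rho - half
        y2 = half + radius * v / (np.linalg.norm(v) + 1e-12)
        # dual ascent on both consensus constraints
        z1 += rho * (x - y1)
        z2 += rho * (x - y2)
    return (x > 0.5).astype(int)   # round the iterate to bit states

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    M = rng.standard_normal((n, n))
    Q = M @ M.T                    # PSD quadratic term (toy example)
    c = rng.standard_normal(n)
    print(l2_box_admm(Q, c))
```

The iterates are only asymptotically feasible with respect to the binary set, so the converged continuous solution is rounded to obtain the final bit states; TSA would additionally alternate such updates with gradient steps on the continuous trigger.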