Recovering a low-rank matrix and a sparse matrix from their observed sum, a problem known as sparse and low-rank decomposition (SLRD), has attracted considerable attention in recent years. The most popular model for SLRD uses the ℓ1 norm and the nuclear norm as convex surrogates for the sparse and low-rank components, respectively. Since this convex model has certain limitations, various nonconvex models have been explored and found to be very promising. In this paper, we introduce a generalized nonconvex nonsmooth model for SLRD that covers a wide range of nonconvex surrogate functions, namely those that are continuous, concave, and monotonically increasing on [0,∞), for approximating both the ℓ0 norm and the rank function; examples include the ℓp norm (0<p<1) and the Logarithm, Geman, SCAD, and MCP functions. The surrogates chosen for the sparse and low-rank components may differ. Owing to the nonconvexity and the wide variety of eligible surrogates, the resulting optimization problem is intractable to solve directly. Based on the majorization-minimization (MM) scheme, we propose a unified framework, named the MM-ADMM algorithm, which applies to all eligible surrogates as long as their supergradients are available. The constrained majorizing subproblems constructed under the MM framework can be solved efficiently by the alternating direction method of multipliers (ADMM). We investigate and prove theoretical convergence properties, including the convergence of the sequence of objective function values generated by the algorithm and a weak convergence result for the inner ADMM iterations. Experiments on synthetic data and real-world applications demonstrate the effectiveness of the proposed MM-ADMM algorithm.
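As a minimal illustration of the surrogate class described above, the following sketch implements four commonly used nonconvex penalties on [0,∞). The parameterizations (λ, γ, a) are standard forms from the literature, not necessarily those used in this paper; each function is continuous, concave, and monotonically increasing on [0,∞), as the generalized model requires.

```python
import math

# Illustrative nonconvex surrogate penalties phi(x) for x >= 0.
# Each is continuous, concave, and monotonically increasing on [0, inf),
# so each qualifies as a surrogate for |x| (sparsity) or a singular
# value (rank) in the generalized model. Parameter choices are examples.

def lp_penalty(x, p=0.5):
    """ell_p quasi-norm surrogate, 0 < p < 1."""
    return x ** p

def log_penalty(x, lam=1.0, gamma=1.0):
    """Logarithm surrogate: lam/log(1+gamma) * log(1 + gamma*x)."""
    return lam / math.log(1.0 + gamma) * math.log(1.0 + gamma * x)

def mcp_penalty(x, lam=1.0, gamma=2.0):
    """Minimax concave penalty (MCP): quadratic ramp, then constant."""
    if x <= gamma * lam:
        return lam * x - x ** 2 / (2.0 * gamma)
    return gamma * lam ** 2 / 2.0

def scad_penalty(x, lam=1.0, a=3.7):
    """SCAD: linear near 0, quadratic transition, then constant."""
    if x <= lam:
        return lam * x
    if x <= a * lam:
        return (2.0 * a * lam * x - x ** 2 - lam ** 2) / (2.0 * (a - 1.0))
    return lam ** 2 * (a + 1.0) / 2.0
```

Applied elementwise to a matrix (for the sparse part) or to its singular values (for the low-rank part), any of these penalties instantiates the generalized model; concavity is what makes the MM majorization via supergradients possible.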