The architecture of a multilayer perceptron (MLP) neural network largely determines its performance, yet for a specific classification problem a suitable architecture must be fixed beforehand. In practice, such design decisions often rely on trial and error and on expert experience. To automatically design an MLP's architecture and optimize its parameters, a multiobjective bilevel programming model is built. In this model, the upper level poses a multiobjective optimization problem that searches for a set of Pareto-optimal MLP architectures with respect to network complexity, training error rate, and validation error rate, while the lower level poses a single-objective optimization problem that searches for the optimal network parameters of a given architecture. To solve this model efficiently, a novel multiobjective hierarchical learning algorithm (MOHLA) is proposed, in which an integer-coded NSGA-II serves as the upper-level optimizer for the set of Pareto-optimal MLP structures, and a non-iterative method serves as the lower-level solver for the MLPs' connection parameters. Once a set of trained MLPs has been obtained by MOHLA, a selective ensemble strategy is adopted to further improve classification accuracy. Three types of multiobjective bilevel programming models are investigated and compared in the experiments, and MOHLA is compared with several state-of-the-art learning approaches on various classification problems. The experimental results confirm that MOHLA performs well.
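The bilevel scheme described above can be illustrated with a minimal sketch. The sketch assumes an ELM-style lower level (random hidden weights, output weights by least squares) as one possible non-iterative solver, and replaces the full integer-coded NSGA-II with plain non-dominated filtering over a small candidate set; the dataset, function names, and these simplifications are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset (illustrative only).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

def lower_level_solve(n_hidden):
    """Lower level: for a fixed architecture (n_hidden units), fit a
    single-hidden-layer MLP non-iteratively, ELM-style (an assumption):
    random hidden weights, output weights by least squares."""
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H_tr = np.tanh(X_tr @ W + b)
    beta = np.linalg.pinv(H_tr) @ y_tr  # least-squares output weights

    def err(Xs, ys):
        pred = (np.tanh(Xs @ W + b) @ beta > 0.5).astype(float)
        return float(np.mean(pred != ys))

    return err(X_tr, y_tr), err(X_va, y_va)

def evaluate(n_hidden):
    """Upper-level objectives for one integer-coded architecture:
    (complexity, training error rate, validation error rate)."""
    tr, va = lower_level_solve(n_hidden)
    return (n_hidden, tr, va)

def pareto_front(pop):
    """Keep architectures not dominated in all three objectives
    (a stand-in for NSGA-II's non-dominated sorting)."""
    objs = {n: evaluate(n) for n in pop}
    front = []
    for n, fn in objs.items():
        dominated = any(
            all(gm <= fm for gm, fm in zip(objs[m], fn)) and objs[m] != fn
            for m in pop if m != n
        )
        if not dominated:
            front.append(n)
    return sorted(set(front))

# Integer-coded "population" of candidate hidden-layer sizes.
candidates = [2, 4, 8, 16, 32]
front = pareto_front(candidates)
print(front)
```

The Pareto set returned here would then feed the selective ensemble step: a subset of the corresponding trained MLPs is chosen and their predictions are combined.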