This paper presents a novel neural network architecture, the analysis–adjustment–synthesis network (AASN), and tests its efficiency and accuracy in modelling non-linear functions and in classification. The AASN is a composite of three sub-networks: an analysis sub-network, an adjustment sub-network, and a synthesis sub-network. The analysis sub-network is a one-layered network that spreads the input values into a layer of ‘spread input neurons’. The synthesis sub-network is a one-layered network that maps a layer of ‘spread output neurons’ back into the output values. The adjustment sub-network, placed between the analysis and synthesis sub-networks, is a standard multi-layered network that acts as the learning mechanism. After the adjustment sub-network has been trained, in the recall phase the synthesis sub-network receives the values of the spread output neurons and synthesizes them into the output values by a weighted-average computation, with the weights derived using the method of Lagrange multipliers. The approach is tested on four function-mapping problems and one classification problem. The results show that combining the analysis and synthesis sub-networks with a multi-layered network significantly improves the network's efficiency and accuracy.
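To make the three-stage structure concrete, the following is a minimal NumPy sketch of an AASN-style forward pass. The spreading rule, the layer sizes, and the uniform synthesis weights are illustrative assumptions only; in particular, the paper derives the synthesis weights with the method of Lagrange multipliers, which is not reproduced here.

```python
# Conceptual sketch of an AASN-style forward pass (assumed details, not the
# paper's exact formulation).
import numpy as np

rng = np.random.default_rng(0)

def analysis(x, spread=3):
    """Spread each input value into `spread` 'spread input neurons'
    (small random perturbations -- an assumed spreading rule)."""
    return np.repeat(x, spread) + 0.01 * rng.standard_normal(x.size * spread)

def adjustment(z, w1, w2):
    """Standard one-hidden-layer network acting as the learning mechanism."""
    h = np.tanh(z @ w1)
    return h @ w2  # values of the 'spread output neurons'

def synthesis(y_spread, spread=3):
    """Combine each group of `spread` spread-output neurons into one output
    by a weighted average (uniform weights stand in for the
    Lagrange-multiplier-derived weights of the paper)."""
    groups = y_spread.reshape(-1, spread)
    weights = np.full(spread, 1.0 / spread)  # assumed placeholder weights
    return groups @ weights

# Toy dimensions: 2 inputs, 1 output, spread factor 3.
spread = 3
w1 = rng.standard_normal((2 * spread, 8))
w2 = rng.standard_normal((8, 1 * spread))

x = np.array([0.5, -0.2])
y = synthesis(adjustment(analysis(x, spread), w1, w2), spread)
print(y)  # single synthesized output value
```

In this sketch only the adjustment sub-network carries trainable weights, matching the abstract's description of it as the learning mechanism, while the analysis and synthesis stages are fixed pre- and post-processing layers.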