Neural network (NN)-based blind image quality assessment (BIQA) methods have improved image quality prediction accuracy. However, training such networks typically consumes substantial time and large amounts of training data. To address these problems, an efficient NN structure, called the fast quality assessment network (FQA-Net), is proposed for BIQA. FQA-Net is a very simple network, consisting mainly of a convolution layer, a standard deviation measurement layer, and a regression layer. The convolution kernels in the convolution layer are a set of visual filters with the characteristics of visual neurons, obtained by training on natural image samples. The output of the convolution layer is a set of feature maps, and the standard deviation of each feature map is then computed directly. Finally, a regression model maps the standard deviation values to quality scores. FQA-Net not only reduces the number of parameters and the output dimensionality during training, but also effectively prevents overfitting. Results on seven image databases (i.e., LIVE, CSIQ, TID2013, LIVEMD, KADID-10K, LIVEC, and KonIQ-10K) show that FQA-Net achieves relatively low computational complexity, high computational speed, and high accuracy compared with other leading BIQA methods, especially when the number of training samples is small. Moreover, FQA-Net provides stable prediction results across different distortion types.
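To make the three-stage pipeline described above concrete, the following is a minimal sketch in PyTorch: a convolution layer produces feature maps, the standard deviation of each map is taken as a feature, and a regression layer maps these values to a quality score. The filter count, kernel size, input channels, and linear regressor here are illustrative assumptions; the actual FQA-Net uses visual filters pre-learned from natural image samples and its own regression model.

```python
import torch
import torch.nn as nn


class FQANetSketch(nn.Module):
    """Illustrative sketch of the FQA-Net pipeline (not the paper's exact configuration)."""

    def __init__(self, num_filters=32, kernel_size=7):
        super().__init__()
        # Convolution layer: in the paper the kernels are visual filters learned from
        # natural images; here they are randomly initialized for illustration.
        self.conv = nn.Conv2d(1, num_filters, kernel_size, bias=False)
        # Regression layer: maps per-map standard deviations to a single quality score.
        self.regressor = nn.Linear(num_filters, 1)

    def forward(self, x):
        feats = self.conv(x)            # (B, num_filters, H, W) feature maps
        std = feats.std(dim=(2, 3))     # standard deviation of each feature map
        return self.regressor(std)      # predicted quality score


# Usage: a batch of grayscale images of shape (B, 1, H, W)
model = FQANetSketch()
scores = model(torch.rand(4, 1, 224, 224))
print(scores.shape)  # torch.Size([4, 1])
```

Because only the small regressor (and optionally the few convolution kernels) needs to be trained, the number of learnable parameters and the feature dimensionality stay low, which is the property the abstract credits for fast training and resistance to overfitting with few samples.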