A universal adversarial perturbation (UAP) is image-agnostic: a single perturbation can fool a model on most inputs. Although previous investigations have shown that natural-image classification is susceptible to universal adversarial attacks, the impact of UAPs on face recognition has not been fully investigated. In this paper, we therefore assess the vulnerability of face recognition to UAPs. We propose FaUAP-FBF, which exploits the frequency domain by learning high-, middle-, and low-frequency band filters as an additional dimension for refining the facial UAP; the facial UAP and the filters are learned alternately and repeatedly from a training set. Furthermore, we convert non-targeted attacks into targeted attacks by customizing a target example that is out-of-distribution with respect to the training set, so that non-targeted and targeted attacks are handled within a unified targeted-attack framework. Finally, we incorporate the variance of cosine similarity into the adversarial loss, which further enhances attacking capability. Extensive experiments on the LFW and CASIA-WebFace datasets show that FaUAP-FBF achieves a higher fooling rate and better objective stealthiness metrics than existing universal adversarial attacks across all evaluated network architectures, confirming its effectiveness. Our results also imply that UAPs pose a real threat to face recognition systems and should be taken seriously when such systems are designed.
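To make the loss design concrete, the following is a minimal sketch (not the paper's actual implementation) of a targeted adversarial loss that combines the mean cosine similarity between adversarial embeddings and a target embedding with a variance penalty, as the abstract describes. The function names, the weighting parameter `lam`, and the use of NumPy are all our assumptions for illustration.

```python
import numpy as np

def cosine_sim(embeddings, target):
    """Cosine similarity between each row of `embeddings` and the `target` vector."""
    emb_norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    tgt_norm = target / np.linalg.norm(target)
    return emb_norm @ tgt_norm

def adversarial_loss(adv_embeddings, target_embedding, lam=0.1):
    """Targeted-attack loss (illustrative sketch, not the paper's code):
    maximize mean cosine similarity to the target embedding while
    penalizing its variance across the batch, so all perturbed faces
    are pushed toward the target consistently."""
    sims = cosine_sim(adv_embeddings, target_embedding)
    # Negate the mean similarity so that minimizing the loss
    # drives embeddings toward the target; `lam` weights the variance term.
    return -sims.mean() + lam * sims.var()
```

Minimizing such a loss with respect to the shared perturbation (and, in FaUAP-FBF, the learned frequency-band filters) would both increase the average match to the target identity and reduce per-image inconsistency, which is one plausible reading of why the variance term strengthens the attack.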