Face recognition models and systems based on deep neural networks are vulnerable to adversarial examples. However, existing attacks on face recognition are either impractical or ineffective for black-box impersonation attacks in the physical world. In this paper, we propose EAP, an effective black-box impersonation attack against face recognition in the physical world. EAP generates adversarial patches that can be printed with compact mobile printers and attached to the source face to fool face recognition models and systems. To improve the transferability of the adversarial patches, our approach increases input diversity through random similarity transformations and an image pyramid strategy. Furthermore, we introduce a meta-ensemble attack strategy that harnesses multiple pre-trained face models to extract gradient features common across models. We evaluate the effectiveness of EAP on two face datasets against 16 state-of-the-art face recognition backbones, 9 heads, and 5 commercial systems. Moreover, we conduct physical experiments to substantiate its practicality. Our results demonstrate that EAP effectively mounts impersonation attacks against state-of-the-art face recognition models and systems in both digital and physical environments.
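To make the transferability ingredients concrete, the following is a minimal sketch of how random similarity transformations, an image pyramid, and ensemble gradient averaging might fit together, assuming a PyTorch setting; the function names, loss, and hyperparameter values are illustrative assumptions, not the exact EAP procedure (in particular, the full method would restrict updates to the patch region via a mask, omitted here for brevity).

```python
import math
import random
import torch
import torch.nn.functional as F

def random_similarity_transform(x, max_angle=15.0, scale_range=(0.9, 1.1)):
    """Apply a random similarity transform (rotation, uniform scale,
    small translation) to a batch of patched face images x of shape
    (N, C, H, W). Parameter ranges are illustrative placeholders."""
    theta = math.radians(random.uniform(-max_angle, max_angle))
    s = random.uniform(*scale_range)
    tx, ty = random.uniform(-0.05, 0.05), random.uniform(-0.05, 0.05)
    cos, sin = math.cos(theta), math.sin(theta)
    mat = torch.tensor([[s * cos, -s * sin, tx],
                        [s * sin,  s * cos, ty]],
                       dtype=x.dtype, device=x.device)
    mat = mat.unsqueeze(0).repeat(x.size(0), 1, 1)
    grid = F.affine_grid(mat, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def image_pyramid(x, scales=(1.0, 0.75, 0.5)):
    """Build an image pyramid: downscale, then resize back to the
    original resolution, so the model sees the same input at several
    effective resolutions."""
    size = x.shape[-2:]
    views = []
    for s in scales:
        small = F.interpolate(x, scale_factor=s, mode='bilinear',
                              align_corners=False)
        views.append(F.interpolate(small, size=size, mode='bilinear',
                                   align_corners=False))
    return views

def ensemble_patch_gradient(models, x_adv, target_emb):
    """Average gradients over an ensemble of pre-trained face models so
    the patch update follows directions shared by all of them, i.e.
    common gradient features. Each model maps an image batch to
    L2-normalizable identity embeddings (an assumption here)."""
    grads = []
    for model in models:
        x = x_adv.clone().detach().requires_grad_(True)
        loss = 0.0
        # Diverse views: random similarity transform + image pyramid.
        for view in image_pyramid(random_similarity_transform(x)):
            emb = F.normalize(model(view), dim=-1)
            # Impersonation: pull the embedding toward the target's.
            loss = loss + (1.0 - F.cosine_similarity(emb, target_emb).mean())
        loss.backward()
        grads.append(x.grad.detach())
    return torch.stack(grads).mean(dim=0)
```

The intuition behind averaging the per-model gradients is that directions idiosyncratic to one model tend to cancel out, while directions shared across the ensemble survive, which is what helps the patch transfer to unseen black-box models and systems.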