Abstract

With the development of society, deep learning has been widely applied to object detection, face recognition, speech recognition, and other fields. Among these, object detection is a popular direction in computer vision and digital image processing, and face detection is a focus within it. Although face detection has gone through a long research stage, it is still considered one of the more difficult problems in human feature detection. In addition, faces are often inconspicuous and appear in complex environments, so existing techniques cannot accurately recognize faces at different scales, under occlusion, or in different poses. Therefore, this paper adopts an advanced deep learning method based on machine vision to detect human faces automatically. In order to accurately detect a variety of human faces, a multiscale Fast R-CNN method based on upper and lower layers (UPL-RCNN) is proposed. The network is composed of a spatial affine transformation component and a region-of-interest (ROI) feature component. First, multiscale information can be grouped during detection so as to handle small facial regions. Then, drawing inspiration from the human visual system, the method performs contextual reasoning and spatial transformations, including zooming, cropping, and rotating. Comparative experiments show that this method can not only accurately detect human faces but also performs better than Fast R-CNN. Compared with some advanced methods, this method has the advantages of high accuracy, low time consumption, and no need for correlation marks.
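The spatial transformations named above (zooming, cropping, rotating) are all special cases of an affine map applied to image coordinates. The following is a minimal, hypothetical numpy sketch of such a warp, not the paper's implementation: a 2x3 affine matrix maps each output pixel back to a source coordinate, which is sampled with nearest-neighbour interpolation.

```python
import numpy as np

def affine_warp(img, theta):
    """Warp a 2-D array with a 2x3 affine matrix `theta` using
    nearest-neighbour sampling. Zoom, crop, and rotation are all
    expressible as choices of `theta`."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # map the output pixel (x, y) back to a source coordinate
            sx = theta[0, 0] * x + theta[0, 1] * y + theta[0, 2]
            sy = theta[1, 0] * x + theta[1, 1] * y + theta[1, 2]
            sxi, syi = int(round(sx)), int(round(sy))
            if 0 <= sxi < w and 0 <= syi < h:
                out[y, x] = img[syi, sxi]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
# identity matrix: each output pixel samples itself, image unchanged
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
# 2x zoom-out: output pixel (x, y) samples source coordinate (2x, 2y)
zoom = np.array([[2.0, 0.0, 0.0],
                 [0.0, 2.0, 0.0]])
print(np.array_equal(affine_warp(img, identity), img))  # True
```

In a spatial transformer setting, the entries of `theta` would be predicted by the network rather than fixed by hand, and nearest-neighbour sampling would be replaced by a differentiable bilinear kernel.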

Highlights

  • At present, face detection technology has been widely used in many fields such as security, campus, and finance [1]

  • It is difficult to distinguish human faces from other objects in a complex background image, and changes in the proportions, poses, facial expressions, lighting, image quality, age, and occlusion of the face make detection even harder, as shown in Figure 1. Therefore, to ensure the robustness of the detection method, the designed detection algorithm must account for the possible interference caused by various complex backgrounds of the face [4]

  • Experimental Results and Analysis on the WIDER FACE Data Set. This article compares experiments with the Faster R-CNN, Two-Stage CNN, Single Shot Detector, R-FCN, HyperFace, and Aggregate Channel Features (ACF) models to prove the effectiveness of this model

Summary

Introduction

Face detection technology has been widely used in many fields such as security, campus, and finance [1]. Therefore, in order to ensure the robustness of the detection method, the designed detection algorithm must account for the possible interference caused by various complex backgrounds of the face [4]. Faster R-CNN is a method based on the fast regional convolutional network: it uses a deep convolutional network to extract and classify the objects to be detected, effectively improving detection efficiency and accuracy [5]. Compared with traditional face detection technology, Faster R-CNN adopts region-of-interest pooling (ROI pooling), so that the network can share computation results, thereby speeding up the model [6]. The traditional CNN structure can maintain a certain degree of translation and rotation invariance.
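The ROI pooling step described above is what lets region proposals of different sizes share one feature map: each proposal is divided into a fixed grid of bins and max-pooled, so every region yields an output of the same shape. The sketch below is an illustrative numpy toy, not the paper's code, using an assumed `(x0, y0, x1, y1)` region format.

```python
import numpy as np

def roi_max_pool(feat, roi, out_size=2):
    """Max-pool one region of interest of a 2-D feature map into a
    fixed out_size x out_size grid, so ROIs of any shape produce
    equally sized outputs that downstream layers can share.

    roi = (x0, y0, x1, y1) in feature-map coordinates,
    inclusive start, exclusive end.
    """
    x0, y0, x1, y1 = roi
    region = feat[y0:y1, x0:x1]
    h, w = region.shape
    # bin edges that cover the region as evenly as possible
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    out = np.empty((out_size, out_size), dtype=feat.dtype)
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out

feat = np.arange(36, dtype=float).reshape(6, 6)
pooled = roi_max_pool(feat, (0, 0, 4, 4))
print(pooled.shape)  # (2, 2)
```

Because every ROI is pooled from the same shared feature map, the expensive convolutional forward pass runs once per image rather than once per proposal, which is the speed-up the text attributes to ROI pooling.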
