The face is one of the most important factors in communication between people, and many face recognition systems have therefore been developed. These systems recognize a person from images captured by a camera. However, facial images vary because of external factors such as body posture, lighting conditions, and facial expressions. These variations degrade recognition accuracy, so face recognition systems must overcome this problem. Three-dimensional face data can solve it, but three-dimensional measuring systems are expensive. We propose a method that estimates three-dimensional face data from a two-dimensional image captured by a camera. The method uses an artificial neural network that has learned the relation between two-dimensional facial images and three-dimensional facial data. To train the network, we generate paired data measured with a CCD camera and a laser range finder. The experimental results show that the proposed method is effective.
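The core idea, learning a mapping from two-dimensional image intensities to three-dimensional depth values from measured pairs, can be sketched as follows. This is a minimal illustration, not the paper's actual network: it assumes a small one-hidden-layer regressor trained with gradient descent, and the image/depth pairs here are synthetic stand-ins for data a CCD camera and laser range finder would provide.

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer network mapping a flattened
# 2-D grayscale "image" to a per-pixel depth map. All sizes and data
# are synthetic placeholders, not the paper's configuration.
rng = np.random.default_rng(0)

n_pixels = 64      # flattened 8x8 image
n_hidden = 32
n_samples = 200

# Synthetic stand-in for measured (2-D image, 3-D depth) training pairs.
images = rng.random((n_samples, n_pixels))
true_w = rng.normal(size=(n_pixels, n_pixels)) * 0.1
depths = np.tanh(images @ true_w)   # pretend depth maps

# Network parameters.
w1 = rng.normal(size=(n_pixels, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, n_pixels)) * 0.1
b2 = np.zeros(n_pixels)

def forward(x):
    """Hidden tanh layer followed by a linear depth-map output."""
    h = np.tanh(x @ w1 + b1)
    return h, h @ w2 + b2

def mse(pred, target):
    return np.mean((pred - target) ** 2)

lr = 0.01
_, pred = forward(images)
loss_before = mse(pred, depths)

for _ in range(1000):
    h, pred = forward(images)
    # Backpropagate the squared-error loss through both layers.
    grad_out = 2.0 * (pred - depths) / n_samples
    grad_w2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ w2.T) * (1.0 - h ** 2)   # tanh derivative
    grad_w1 = images.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)
    w1 -= lr * grad_w1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

loss_after = mse(forward(images)[1], depths)
```

After training, the squared error between the predicted and measured depth maps decreases, which is the sense in which the network "learns the relation" between the two modalities; the real system would use actual registered camera and range-finder data instead of the synthetic arrays above.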