Research into preventing driver inattention by detecting the driver's gaze has become increasingly important, as traffic accidents caused by driver inattention have increased. In a vehicle environment, conventional gaze-detection methods use either a single camera or multiple cameras. With a single camera, excessive rotation of the driver's head may prevent the eye region from being accurately detected, thereby reducing gaze-detection accuracy. To address this issue, previous researchers attempted gaze detection using dual cameras. However, those methods use the information obtained from each camera selectively; thus, the accuracy improvement is limited because the information from both cameras is not used simultaneously. In addition, the processing complexity increases when the images obtained from dual cameras are processed simultaneously. Accordingly, this paper proposes a method for detecting the driver's gaze position in the vehicle. This is the first study to estimate the driver's gaze via a deep convolutional neural network (CNN) that simultaneously uses the image information acquired from dual near-infrared (NIR) cameras. Previous research selectively used one of the images acquired from the dual cameras, and existing CNN-based gaze-detection methods use multiple deep CNNs for the driver's eye and facial images. In contrast, the proposed method uses a single CNN model that integrates all the information acquired from the dual cameras into one three-channel image and uses it as the network input, thereby increasing recognition reliability and reducing computational cost. We conducted experiments on a self-built driver database comprising images from 26 participants (the Dongguk dual-camera-based gaze database) and on the Columbia gaze dataset, an open database. The results demonstrate that the proposed method outperforms existing methods.
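The idea of fusing the two camera streams into a single three-channel network input can be sketched as follows. This is a minimal illustration only: the channel assignment shown (camera A, camera B, and their pixel-wise mean as the third channel) is a hypothetical choice for demonstration, not necessarily the paper's exact fusion scheme.

```python
import numpy as np

def fuse_dual_camera_frames(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Fuse two single-channel NIR frames into one 3-channel CNN input.

    Hypothetical channel assignment for illustration:
      channel 0 = camera A, channel 1 = camera B,
      channel 2 = pixel-wise mean of the two frames.
    """
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must share the same resolution")
    # Compute the mean in float to avoid uint8 overflow, then cast back.
    mean = ((frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0)
    mean = mean.astype(frame_a.dtype)
    # Stack along the last axis to form an H x W x 3 image.
    return np.stack([frame_a, frame_b, mean], axis=-1)

# Example: two 224x224 NIR frames -> one 224x224x3 network input.
a = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
b = np.random.randint(0, 256, (224, 224), dtype=np.uint8)
x = fuse_dual_camera_frames(a, b)
print(x.shape)  # (224, 224, 3)
```

Feeding one fused image to a single CNN, rather than running separate networks per camera, is what the abstract credits for the reduced computational cost.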