Cone beam computed tomography (CBCT) is used extensively in image-guided surgery and radiotherapy, but it exposes patients to ionizing radiation. Sparse-view CBCT is a primary approach to lowering the radiation dose; however, it introduces streak artifacts into the reconstructed images. We develop a dual convolutional neural network architecture (DualCNN) to eliminate streak artifacts from sparse-view CBCT images. In the first stage, we develop an interpolation CNN in the projection domain to restore the full-view projections from the sparse-view projections. The restored full-view projections are then fed to the Feldkamp–Davis–Kress (FDK) algorithm to reconstruct the CBCT images. In the second stage, we develop an image-domain CNN to further improve the quality of the CBCT images. DualCNN is evaluated on real CBCT X-ray projection data of walnuts. Experimental results show that DualCNN reconstructs high-quality CT images from only a quarter of the full-view projections and significantly outperforms other representative methods in both qualitative and quantitative evaluations. DualCNN achieves a mean root-mean-square error of 0.0369, a mean peak signal-to-noise ratio of 26.93 dB, and a mean structural similarity of 0.732 over 3800 reconstructed images. Therefore, our DualCNN can significantly lower the CBCT radiation dose while maintaining good reconstructed image quality.
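To make the two-stage, dual-domain pipeline concrete, the sketch below shows one possible arrangement of the components described above: a projection-domain interpolation CNN, an externally supplied FDK reconstruction operator, and an image-domain refinement CNN. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the class names (`ProjectionInterpCNN`, `ImageDomainCNN`), the network depths and widths, the residual formulation, the 4x upsampling along the view axis (matching the quarter-view setting in the abstract), and the `fdk_op` callable are all assumptions made for illustration.

```python
# Hypothetical sketch of a dual-domain sparse-view CBCT pipeline (not the authors' code).
# Stage 1: a projection-domain CNN upsamples sparse-view projections to full-view.
# Stage 2: FDK reconstruction (supplied externally), then an image-domain refinement CNN.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + ReLU; a common building block, used here as an assumption."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class ProjectionInterpCNN(nn.Module):
    """Maps a sparse-view projection stack to an estimated full-view stack.

    The view dimension is treated as the height of a 2D "sinogram image";
    upsampling along that axis restores the missing views (factor of 4 assumed).
    """

    def __init__(self, upsample_factor=4, ch=64):
        super().__init__()
        self.body = nn.Sequential(conv_block(1, ch), conv_block(ch, ch), conv_block(ch, ch))
        # Upsample only along the view axis; the detector axis is left unchanged.
        self.up = nn.Upsample(scale_factor=(upsample_factor, 1),
                              mode="bilinear", align_corners=False)
        self.out = nn.Conv2d(ch, 1, kernel_size=3, padding=1)

    def forward(self, sparse_proj):  # (B, 1, views_sparse, det_pixels)
        return self.out(self.up(self.body(sparse_proj)))  # (B, 1, views_full, det_pixels)


class ImageDomainCNN(nn.Module):
    """Residual refinement of FDK-reconstructed slices to suppress remaining streaks."""

    def __init__(self, ch=64, depth=5):
        super().__init__()
        layers = [conv_block(1, ch)] + [conv_block(ch, ch) for _ in range(depth - 2)]
        layers.append(nn.Conv2d(ch, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, fdk_slice):  # (B, 1, H, W)
        return fdk_slice + self.net(fdk_slice)  # residual learning


def reconstruct(sparse_proj, proj_cnn, image_cnn, fdk_op):
    """End-to-end inference: projection-domain CNN -> FDK -> image-domain CNN.

    `fdk_op` is a user-supplied callable implementing Feldkamp-Davis-Kress
    reconstruction (e.g. wrapping a CT toolbox); it is not defined here.
    """
    full_proj = proj_cnn(sparse_proj)
    fdk_slices = fdk_op(full_proj)
    return image_cnn(fdk_slices)
```

In this arrangement the two CNNs could be trained separately (projection-domain interpolation against full-view sinograms, image-domain refinement against reference reconstructions), which avoids backpropagating through the FDK operator; whether the original work trains them jointly or separately is not stated in the abstract.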