Ultrafast ultrasound imaging based on plane wave (PW) compounding has been proposed for use in various clinical and preclinical applications, including shear wave imaging and super-resolution blood flow imaging. Because the image quality afforded by PW imaging is highly dependent on the number of PW angles used for compounding, a tradeoff between image quality and frame rate arises. In the present study, a convolutional neural network (CNN) beamformer based on a combination of the GoogLeNet and U-Net architectures was developed to replace the conventional delay-and-sum (DAS) algorithm and thereby obtain high-quality images at a high frame rate. Radio-frequency (RF) channel data are used as the inputs to the CNN beamformers, and the outputs are beamformed in-phase and quadrature (I/Q) data. Simulations and phantom experiments revealed that the images predicted by the CNN beamformers had higher resolution and contrast than those obtained by conventional single-angle PW imaging with the DAS approach. In in vivo studies, the contrast-to-noise ratios (CNRs) of carotid artery images predicted by the CNN beamformers trained with three- or five-PW compounded images as ground truth were approximately 12 dB in the transverse view, considerably higher than the CNR obtained with the DAS beamformer (3.9 dB). Most tissue speckle information was retained in the in vivo images produced by the CNN beamformers. In conclusion, although only a single PW at 0° was fired, the quality of the output image was comparable to that of an image compounded from three or five PW angles. In other words, the quality-frame rate tradeoff of coherent compounding can be mitigated through the use of the proposed CNN for beamforming.
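To make the data flow concrete, the sketch below illustrates one possible way to combine GoogLeNet-style Inception blocks with a U-Net encoder-decoder so that single-angle RF channel data are mapped directly to beamformed I/Q data. It is a minimal, hypothetical PyTorch implementation, not the authors' exact network: the layer counts, channel widths, input shape (128 receive channels by 1024 depth samples), and output resolution are all assumptions made for illustration, and in practice the network would be trained against I/Q data compounded from three or five PW angles as the ground truth.

```python
# Illustrative sketch only (assumed sizes, not the published architecture):
# a small U-Net whose stages use GoogLeNet-style Inception blocks, mapping
# single-angle RF channel data to two output maps interpreted as I and Q.
import torch
import torch.nn as nn


class InceptionBlock(nn.Module):
    """GoogLeNet-style block: parallel 1x1, 3x3, 5x5, and pooled branches."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 5, padding=2), nn.ReLU(inplace=True))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the four parallel branches along the channel axis.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)


class CNNBeamformer(nn.Module):
    """U-Net encoder-decoder with Inception blocks: RF channel data -> I/Q."""
    def __init__(self):
        super().__init__()
        self.enc1 = InceptionBlock(1, 32)
        self.enc2 = InceptionBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = InceptionBlock(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = InceptionBlock(128, 64)   # 64 skip + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = InceptionBlock(64, 32)    # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, 2, 1)       # two output maps: I and Q

    def forward(self, rf):                    # rf: (batch, 1, channels, samples)
        e1 = self.enc1(rf)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # (batch, 2, channels, samples)


# Hypothetical usage: one 0-degree PW event, 128 channels x 1024 depth samples.
model = CNNBeamformer()
rf_data = torch.randn(1, 1, 128, 1024)
iq = model(rf_data)
print(iq.shape)  # torch.Size([1, 2, 128, 1024])
```

In this sketch the Inception branches capture features at multiple receptive-field sizes, while the U-Net skip connections preserve the fine speckle detail that the abstract reports is largely retained in the in vivo predictions; the specific choice of loss function and training schedule is left unspecified here.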