Ultrasound imaging of the tongue can provide detailed articulatory information addressing a variety of phonetic questions. However, using this method often requires the time-consuming process of manually labeling tongue contours in noisy images. This study presents a method for the automatic identification and extraction of tongue contours using convolutional neural networks, a class of machine learning algorithms that has proven highly successful in many biomedical image segmentation tasks. We have adopted the U-net architecture (Ronneberger, Fischer, & Brox 2015, U-Net: Convolutional Networks for Biomedical Image Segmentation, DOI:10.1007/978-3-319-24574-4_28), which learns from human-annotated splines using repeated convolution and max-pooling layers for feature extraction, together with deconvolution layers for generating spatially precise predictions of the tongue contours. Trained on a preliminary dataset of 8881 human-labeled tongue images from three speakers, our model generates discrete tongue splines comparable to those identified by human annotators (Dice Similarity Coefficient = 0.71). Although work is ongoing, this neural-network-based method shows considerable promise for the post-processing of ultrasound images in phonetic research. [Work supported by NSF grant BCS-1348150 to P.S. Beddor and A.W. Coetzee.]
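
The Dice Similarity Coefficient reported above is a standard overlap measure between a predicted segmentation mask and a human-annotated one: 2|A ∩ B| / (|A| + |B|). The sketch below is a generic NumPy illustration of how such a score is computed on binary masks; the function name, the toy masks, and the empty-mask convention are illustrative assumptions, not part of the authors' pipeline.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty
    (a common convention; an assumption here)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy example: two 4x4 masks containing a thin "contour" row.
a = np.zeros((4, 4), dtype=int)
b = np.zeros((4, 4), dtype=int)
a[1, :] = 1    # hypothetical human-labeled contour (4 pixels)
b[1, :3] = 1   # hypothetical model prediction overlapping 3 of them
print(round(dice_coefficient(b, a), 3))  # → 0.857
```

A score of 1.0 indicates perfect overlap with the human annotation; the 0.71 reported for the model lies between the perfect-overlap and no-overlap extremes of this scale.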