In many inertial confinement fusion (ICF) experiments, the neutron yield and other parameters cannot be completely accounted for with one- and two-dimensional models. This discrepancy suggests that three-dimensional effects may be significant. Sources of these effects include defects in the shells and shell interfaces, the fill tube of the capsule, and the joint feature in double-shell targets. Due to their ability to penetrate materials, x rays are used to image the internal structure of objects. Methods such as computed tomography use x-ray radiographs from hundreds of projections to reconstruct a three-dimensional model of the object. In experimental environments, such as the National Ignition Facility and Omega-60, such views are scarce, and in many cases only a single line of sight is available. Mathematical reconstruction of a 3D object from sparse views is an ill-posed inverse problem. These types of problems are typically solved by utilizing prior information. Neural networks have been used for the task of 3D reconstruction as they are capable of encoding and leveraging this prior information. We use half a dozen different convolutional neural networks to produce different 3D representations of ICF implosions from the experimental data. Deep supervision is used to train a neural network to produce high-resolution reconstructions. These representations are used to track 3D features of the capsules, such as the ablator, inner shell, and the joint between shell hemispheres. Machine learning, supplemented by different priors, is a promising method for 3D reconstructions in ICF and, more generally, in x-ray radiography.
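The ill-posedness mentioned above, and the role of a prior in resolving it, can be illustrated with a minimal toy problem. The sketch below is not from the paper: it replaces the neural-network prior with a simple smoothness (Tikhonov) penalty, and the forward projection operator with a small random matrix, purely as illustrative assumptions. With fewer measurements than unknowns, the data alone admit infinitely many solutions; the prior selects a plausible one.

```python
# Toy illustration (assumed, not from the paper): sparse-view reconstruction
# as an ill-posed inverse problem, regularized with a smoothness prior.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 8                       # 20 unknowns, only 8 "projections"
x_true = np.exp(-0.5 * ((np.arange(n) - n / 2) / 3.0) ** 2)  # smooth object
A = rng.standard_normal((m, n))    # stand-in for the forward (projection) operator
y = A @ x_true                     # sparse measurements

# Without a prior: the minimum-norm solution of the underdetermined system
x_mn = np.linalg.pinv(A) @ y

# With a prior: penalize roughness via a first-difference operator D,
# minimizing ||A x - y||^2 + lam * ||D x||^2 (Tikhonov regularization)
D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]
lam = 1.0
x_reg = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

print("reconstruction error without prior:", np.linalg.norm(x_mn - x_true))
print("reconstruction error with prior:   ", np.linalg.norm(x_reg - x_true))
```

A learned network plays an analogous role to the `lam * ||D x||^2` term here, except that its prior is encoded implicitly in weights fit to training data rather than written down as an explicit penalty.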