INTRODUCTION: We present a 2-dimensional to 3-dimensional (2D-3D) registration algorithm that identifies the position of a C-arm relative to a 3D volume.

METHODS: This work uses digitally reconstructed radiographs (DRRs), synthetic radiographic images generated by simulating x-ray projections as they would pass through a CT volume (a ray-casting sketch of DRR generation appears after this abstract). To evaluate the algorithm, we used cone-beam CT data from 127 patients obtained from a de-identified registry of cervical, thoracic, and lumbar scans. We systematically evaluated and tuned a self-supervised algorithm and quantified its limitations and convergence rate by simulating C-arm registrations with 80 randomly generated DRRs for each CT volume (a sketch of the pose sampling also follows this abstract). The endpoints for this study were the time to convergence, the accuracy of convergence for each of the C-arm's degrees of freedom (DoF), and the overall registration accuracy based on a voxel-by-voxel measurement.

RESULTS: 10,160 unique fluoroscopic images were simulated from the 127 CT scans. Our algorithm converged to the correct solution 81% of the time, with an average computation time of 1.97 seconds. For the fluoroscopic images on which the algorithm converged, registration accuracy was 99.9%, despite using only single-precision computation for speed. Convergence was optimal when the search space was limited to a ±45° offset in the RAO/LAO, cranial/caudal, and receiver rotation angles, with the radiographic isocenter contained within 8,000 cm³ of the volumetric center of the CT volume.

CONCLUSIONS: The machine learning algorithm we present has the potential to aid surgeons with many aspects of spine surgery through an automated 2D-3D registration process. Future work will focus on algorithmic optimization and deep learning to improve the convergence rate and speed profile.
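
The following is a minimal, illustrative sketch of how a DRR can be produced by casting rays from an x-ray source through a CT volume onto a detector and integrating attenuation along each ray. It is not the authors' implementation: the function name generate_drr, the simplified pinhole-style geometry, the detector parameters, and the use of SciPy trilinear interpolation are all assumptions made for illustration.

    # Minimal DRR ray-casting sketch (hypothetical; not the authors' implementation).
    import numpy as np
    from scipy.ndimage import map_coordinates

    def generate_drr(ct, spacing, source, det_center, det_u, det_v,
                     det_shape=(128, 128), det_pixel=2.0, n_samples=256):
        """Integrate CT attenuation along rays from the source to each detector pixel.

        ct         : 3D array of attenuation values, indexed (z, y, x)
        spacing    : voxel spacing in mm per axis (z, y, x)
        source     : x-ray source position in mm
        det_center : detector center position in mm
        det_u/v    : orthonormal in-plane detector axes
        """
        source = np.asarray(source, dtype=float)
        det_center = np.asarray(det_center, dtype=float)
        det_u, det_v = np.asarray(det_u, dtype=float), np.asarray(det_v, dtype=float)
        h, w = det_shape

        # Detector pixel centers in world (mm) coordinates.
        us = (np.arange(w) - w / 2 + 0.5) * det_pixel
        vs = (np.arange(h) - h / 2 + 0.5) * det_pixel
        uu, vv = np.meshgrid(us, vs)
        pixels = det_center + uu[..., None] * det_u + vv[..., None] * det_v  # (h, w, 3)

        # Points sampled along each source-to-pixel ray.
        t = np.linspace(0.0, 1.0, n_samples)
        rays = source + t[None, None, :, None] * (pixels[:, :, None, :] - source)  # (h, w, n, 3)

        # Convert mm to voxel indices and interpolate the CT at every sample point.
        coords = (rays / np.asarray(spacing, dtype=float)).reshape(-1, 3).T  # (3, h*w*n)
        samples = map_coordinates(ct, coords, order=1, mode='constant', cval=0.0)
        line_integrals = samples.reshape(h, w, n_samples).sum(axis=2)

        # Scale by the per-ray step length so the line integral has consistent units.
        step = np.linalg.norm(pixels - source, axis=2) / n_samples
        return line_integrals * step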
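
Likewise, a hypothetical sketch of the randomized C-arm pose sampling described in METHODS: for each CT volume, 80 poses are drawn with rotational offsets of at most ±45° in the RAO/LAO, cranial/caudal, and receiver angles, and with the radiographic isocenter inside an 8,000 cm³ region about the volumetric center. A 20 cm cube centered on the volume is assumed here, since the abstract does not specify the region's shape; the function and parameter names are illustrative.

    # Hypothetical pose-sampling sketch for the simulated C-arm registrations.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_carm_poses(n_poses=80, max_angle_deg=45.0, half_extent_mm=100.0):
        """Return an (n_poses, 6) array: RAO/LAO, cranial/caudal, and receiver
        rotation offsets in degrees, plus a 3D isocenter offset in mm from the
        CT volume's center (100 mm half-extent ~ a 20 cm / 8,000 cm^3 cube)."""
        angles = rng.uniform(-max_angle_deg, max_angle_deg, size=(n_poses, 3))
        offsets = rng.uniform(-half_extent_mm, half_extent_mm, size=(n_poses, 3))
        return np.hstack([angles, offsets])

    # The study simulated 80 poses per CT volume: 127 volumes -> 10,160 DRRs.
    poses = sample_carm_poses(n_poses=80)
    print(poses.shape)  # (80, 6)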