A deep learning (DL) model has been developed to estimate patient eye-lens dose in real time for given exposure and geometric conditions during fluoroscopically guided neuro-interventional procedures. The parameters input into the DL model for dose prediction include the patient-head shift from isocenter and cephalometric landmark locations, the latter serving as a surrogate for head size. Machine learning (ML) models were investigated to automatically detect these parameters from the in-procedure fluoroscopic image. Fluoroscopic images of a Kyoto Kagaku anthropomorphic head phantom were acquired at various known X (transverse) and Y (longitudinal) shifts, as well as at different magnification modes, to create an image database. For each image, anatomical-landmark coordinates were obtained manually using ImageJ and used as ground-truth labels for training. This database was then used to train two separate ML models: one predicts the patient-head shift in the X and Y directions, and the other predicts the coordinates of the anatomical landmarks. From these coordinates, the distance between the landmarks is calculated and input into the DL dose-prediction model. Model performance was evaluated using mean absolute error (MAE) for the head-shift model and mean absolute percentage error (MAPE) for the landmark-coordinate model. The goal is to implement these two models in the Dose Tracking System (DTS) developed by our group, allowing the DTS to automatically detect the patient head size and position for eye-lens dose prediction and eliminating the need for manual input by the clinical staff.
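
To make the intended two-model pipeline concrete, the following is a minimal sketch of the inference chain described above. All names (shift_model, landmark_model, dose_model), the Keras-style predict() interface, the use of two landmarks for the distance, and the input shapes are illustrative assumptions, not the actual implementation or the DTS interface.

```python
import numpy as np

def predict_lens_dose(fluoro_image, shift_model, landmark_model, dose_model):
    """Sketch: estimate eye-lens dose from one in-procedure fluoroscopic frame.

    All three model objects are assumed to expose a Keras-style predict()
    method; this is an illustrative assumption.
    """
    x = fluoro_image[np.newaxis, ...]  # add a batch dimension

    # ML model 1: patient-head shift from isocenter, (X, Y).
    x_shift, y_shift = shift_model.predict(x)[0]

    # ML model 2: cephalometric landmark coordinates, shape (n_landmarks, 2).
    landmarks = landmark_model.predict(x)[0].reshape(-1, 2)

    # Inter-landmark distance as the head-size surrogate; shown here as the
    # Euclidean distance between the first two landmarks (an assumption).
    head_size = np.linalg.norm(landmarks[0] - landmarks[1])

    # DL dose-prediction model takes the geometry-derived parameters.
    features = np.array([[x_shift, y_shift, head_size]])
    return dose_model.predict(features)[0]
```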
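For reference, the two reported evaluation metrics can be stated explicitly. This short sketch assumes paired arrays of ground-truth and predicted values and is independent of the actual evaluation code used in the study.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, reported for the head-shift model."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, reported for the landmark model (%)."""
    y_true = np.asarray(y_true, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - np.asarray(y_pred)) / y_true))
```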