<h3>Purpose/Objective(s)</h3>
Deep learning has shown promise in many medical imaging tasks. One major obstacle to deploying AI in the clinic is the lack of performance assurance at test time for the specific individual. In the context of deformable image registration (DIR), training and testing samples can differ in image characteristics, motion characteristics, or both, raising concerns about the reliability of direct inference. This study aims to combine the efficiency and data-driven regularization of deep-learning approaches with the individual-centric perspective of classic optimization, to generate DIR solutions with high accuracy, robustness, and efficiency.
<h3>Materials/Methods</h3>
Our method utilizes a descriptor network to impose a general feasibility prior on deformation vector fields (DVFs). The trained registration network is further adapted to each image pair at test time to optimize individual performance. The adaptation method was tested under various domain shifts in cross-protocol, cross-platform, and cross-modality scenarios. For cross-protocol, the deep prior was generated from 750 CT-based DVFs; the network was trained on 20 ten-phase 4D CBCTs and tested on the SPARE synthesized CBCT set. For cross-platform, the deep prior was generated from 100 CTA-based DVFs; the network was trained on 10 0.35 T MRI scans and tested on 1.5 T cardiac MRI. For cross-modality, the deep prior was generated from 750 CT-based DVFs; the network was trained on 20 4D CBCTs and test-time adapted for a pilot 4D lung MRI acquisition. Our method was compared against a manually tuned classic B-spline method in SimpleElastix and against the network without adaptation.
<h3>Results</h3>
Our method achieved (2.11 +/- 1.61) mm, (2.26 +/- 1.41) mm, and (2.00 +/- 1.63) mm landmark-based registration errors on lung CBCT, cardiac MRI, and lung MRI, respectively.
In a motion-compensated CBCT enhancement test, it achieved (102.1 +/- 7.96) HU root-mean-square error and (0.994 +/- 0.002) structural similarity index relative to the ground-truth CT. Our method improved registration accuracy for each individual test input, with statistical significance (p-values from superiority tests on the order of 1E-3 or less). The only exception was the target registration error in the midventricular and apical regions on cardiac MRI, where statistical significance over the B-spline model could not be established.
<h3>Conclusion</h3>
We have demonstrated the efficacy of adapting a trained registration network to unseen data acquired with a different protocol, scanner, imaging platform, or even modality. The proposed paradigm enables applying deep-learning registration to atypical image presentations or pilot imaging types for which data are unavailable to support conventional network training. The test-time adaptation adjusts the optimization focus to the individual case at hand, providing confidence for clinical translation and adoption. The proposed rationale applies to other medical AI developments where personalization and robustness at run time are critical.
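The core idea of the Materials/Methods section, optimizing a deformation for the specific test pair under a regularizing prior, can be illustrated with a minimal, hypothetical sketch. This is not the study's implementation: it uses a 1D toy problem, a hand-written smoothness penalty standing in for the learned descriptor-network prior, finite-difference gradients instead of network backpropagation, and illustrative names (`warp`, `adapt`, `lam`) that do not come from the abstract.

```python
import numpy as np

def warp(moving, dvf):
    # Pull-back warp: sample the moving image at x + dvf(x)
    # via linear interpolation (edges are clamped by np.interp).
    x = np.arange(moving.size, dtype=float)
    return np.interp(x + dvf, x, moving)

def loss(dvf, fixed, moving, lam=0.01):
    # Image dissimilarity plus a smoothness penalty; the penalty is a
    # simple stand-in for the learned DVF feasibility prior.
    sim = np.sum((warp(moving, dvf) - fixed) ** 2)
    prior = np.sum(np.diff(dvf) ** 2)
    return sim + lam * prior

def adapt(fixed, moving, steps=300, lr=1.0, eps=1e-4):
    # Test-time adaptation: refine the DVF for this one image pair by
    # gradient descent (finite-difference gradients for simplicity).
    dvf = np.zeros_like(fixed)
    for _ in range(steps):
        base = loss(dvf, fixed, moving)
        grad = np.zeros_like(dvf)
        for i in range(dvf.size):
            d = dvf.copy()
            d[i] += eps
            grad[i] = (loss(d, fixed, moving) - base) / eps
        dvf -= lr * grad
    return dvf

# Toy pair: the moving image is the fixed image shifted by 2 voxels,
# so the recovered DVF should approach +2 where the structure lies.
i = np.arange(32, dtype=float)
fixed = np.exp(-((i - 15.0) / 4.0) ** 2)
moving = np.exp(-((i - 17.0) / 4.0) ** 2)
dvf = adapt(fixed, moving)
```

The sketch mirrors the abstract's rationale only in structure: a data-independent prior constrains the per-case optimization so the individualized solution stays within a feasible deformation space.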