Current minimally invasive techniques for beating-heart surgery are associated with three major limitations: the shortage of realistic and safe training methods, the difficulty of selecting port locations for optimal target coverage from X-rays and angiograms, and the reliance on the endoscope alone for instrument navigation in a dynamic and confined 3D environment. To supplement current surgical training, planning, and guidance methods, we continue to develop our Virtual Cardiac Surgery Planning environment (VCSP) – a virtual-reality, patient-specific model of the thoracic cavity derived from 3D pre-procedural images. In this work, we create and validate dynamic models of the heart and its components. A static model is first generated by segmenting one of the image frames in a given 4D data set. The dynamics of this model are then extracted from the remaining image frames using a non-linear, intensity-based registration algorithm with a choice of six different similarity metrics. The algorithm is validated on an artificial CT image set created from an excised porcine heart, on CT images of canine subjects, and on MR images of human volunteers. We found that, with the appropriate choice of similarity metric, our algorithm extracts the motion of the epicardial surface in CT images, or of the myocardium, right atrium, right ventricle, aorta, left atrium, pulmonary arteries, vena cava, and epicardial surface in MR images, with a root-mean-square error on the order of 1 mm. These results indicate that our method of modeling cardiac motion is easily adaptable and sufficiently accurate to meet the requirements for reliable cardiac surgery training, planning, and guidance.
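The registration step described above searches for the deformation that optimizes an intensity-based similarity metric between a reference frame and each subsequent frame. The six metrics used in this work are not named here, so the following is only an illustrative sketch, not the authors' implementation: it shows two common intensity-based metrics (sum of squared differences and normalized cross-correlation) driving a toy 1D alignment. All data and function names are hypothetical.

```python
import math

def ssd(a, b):
    """Sum of squared intensity differences: lower means better alignment."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ncc(a, b):
    """Normalized cross-correlation: 1.0 when intensities match exactly."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

def shifted(frame, s):
    """Candidate transform for this toy example: a circular shift by s samples."""
    return frame[s:] + frame[:s]

# Two toy 1D "frames"; the second is the first circularly shifted by one sample.
frame0 = [10.0, 12.0, 30.0, 55.0, 33.0, 14.0, 11.0]
frame1 = [11.0, 10.0, 12.0, 30.0, 55.0, 33.0, 14.0]

# The registration evaluates candidate transforms and keeps the one that
# minimizes SSD (or, equivalently here, maximizes NCC).
best = min(range(len(frame1)), key=lambda s: ssd(frame0, shifted(frame1, s)))
print(best)  # → 1
```

A real non-linear registration replaces the integer shift with a dense or parametric 3D deformation field and optimizes it iteratively, but the role of the similarity metric as the objective function is the same.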