Surgical navigation systems involve various technologies for segmentation, calibration, registration, tracking, and visualization. These systems aim to superimpose multisource information on the surgical field and provide surgeons with a composite overlay (augmented-reality) view, improving operative precision and the surgical experience. Surgical 3-D tracking is the key to building such systems. Unfortunately, surgical 3-D tracking remains challenging for endoscopic and robotic navigation systems and is easily degraded by image artifacts, tissue deformation, and inaccurate positional (e.g., electromagnetic) sensor measurements. This work explores a new monocular endoscopic hybrid 3-D tracking method, called spatially constrained adaptive differential evolution, that combines two spatial constraints with observation-recall adaptive propagation and observation-based fitness computing for stochastic optimization. Specifically, we spatially constrain inaccurate electromagnetic sensor measurements to the centerline of anatomical tubular structures so that they remain physically located inside the tubes, and interpolate these measurements to reduce jitter errors for smooth 3-D tracking. We then propose observation-recall adaptive propagation with fitness computing to precisely fuse the constrained sensor measurements, preoperative images, and endoscopic video sequences for accurate hybrid 3-D tracking. Additionally, we propose a new marker-free hybrid registration strategy to precisely align positional sensor measurements to preoperative images. Our new framework was evaluated on a large amount of clinical data acquired from various surgical endoscopic procedures, and the experimental results show that it consistently outperforms current surgical 3-D tracking approaches. In particular, the position and rotation errors were significantly reduced from (6.55 mm, 11.4°) to (3.02 mm, 8.54°).
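The centerline constraint described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not the authors' implementation: it assumes the centerline is available as an ordered array of 3-D points, projects each noisy electromagnetic (EM) measurement onto the nearest centerline segment, and uses a plain moving average as a stand-in for the interpolation step; the function names (project_to_centerline, constrain_and_smooth) and the window parameter are hypothetical.

```python
import numpy as np

def project_to_centerline(p, centerline):
    """Project one 3-D point onto the closest segment of a polyline centerline.

    centerline: (N, 3) ordered points along the axis of the tubular structure.
    Returns the constrained point lying on the centerline.
    """
    a = centerline[:-1]                  # segment start points, shape (N-1, 3)
    b = centerline[1:]                   # segment end points,   shape (N-1, 3)
    ab = b - a
    t = np.einsum('ij,ij->i', p - a, ab) / np.maximum(
        np.einsum('ij,ij->i', ab, ab), 1e-12)
    t = np.clip(t, 0.0, 1.0)             # clamp to the extent of each segment
    foot = a + t[:, None] * ab           # closest point on each segment
    d2 = np.sum((foot - p) ** 2, axis=1)
    return foot[np.argmin(d2)]

def constrain_and_smooth(em_positions, centerline, window=5):
    """Constrain noisy EM positions to the centerline, then reduce jitter
    with a centered moving average (a simple surrogate for interpolation)."""
    constrained = np.array([project_to_centerline(p, centerline)
                            for p in em_positions])
    kernel = np.ones(window) / window
    return np.column_stack(
        [np.convolve(constrained[:, k], kernel, mode='same') for k in range(3)]
    )

# Usage with synthetic data: a straight centerline and jittery EM measurements.
centerline = np.column_stack([np.linspace(0, 100, 200), np.zeros(200), np.zeros(200)])
em_positions = centerline[::10] + np.random.normal(scale=2.0, size=(20, 3))
smoothed = constrain_and_smooth(em_positions, centerline)
```

In this sketch the projection enforces the physical prior that the endoscope tip lies inside the tubular anatomy, while the smoothing pass suppresses frame-to-frame jitter before the constrained measurements are fused with preoperative images and endoscopic video in the stochastic optimization stage.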