Image guidance for minimally invasive interventions is usually performed by acquiring fluoroscopic images with a monoplanar or biplanar C-arm system. However, these projective data provide only limited information about the spatial structure and position of interventional tools and devices such as stents, guide wires, or coils. In this work, we propose a deep learning-based pipeline for real-time tomographic (four-dimensional, 4D) interventional guidance at conventional dose levels. Our pipeline comprises two steps. In the first, interventional tools are extracted from four cone-beam CT projections using a deep convolutional neural network. These projections are then Feldkamp-reconstructed and fed into a second network, which is trained to segment the interventional tools and devices in this highly undersampled reconstruction. Both networks are trained on simulated CT data and evaluated on both simulated data and C-arm cone-beam CT measurements of stents, coils, and guide wires. The pipeline is capable of reconstructing interventional tools from only four X-ray projections without the need for a patient prior. At an isotropic voxel size of 100 μm, our method achieves a precision/recall within a 100 μm environment of the ground truth of 93%/98%, 90%/71%, and 93%/76% for guide wires, stents, and coils, respectively. A deep learning-based approach for 4D interventional guidance is able to overcome the drawbacks of today's interventional guidance by providing full spatiotemporal (4D) information about the interventional tools at dose levels comparable to conventional fluoroscopy.
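The two-step pipeline described above can be sketched as a simple composition: a first network segments the tools in each of the four 2D projections, a Feldkamp-type reconstruction lifts them into a (heavily undersampled) volume, and a second network segments the tools in that volume. The sketch below uses NumPy with toy stand-ins for the trained CNNs and the FDK step; all function names and the mock operations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def guidance_pipeline(projections, tool_net, fdk_reconstruct, seg_net):
    """Sketch of the two-step guidance pipeline (illustrative names).

    1. tool_net: extracts interventional tools in each 2D projection.
    2. fdk_reconstruct: Feldkamp-style reconstruction of the tool-only projections.
    3. seg_net: segments the tools in the highly undersampled reconstruction.
    """
    tool_projections = np.stack([tool_net(p) for p in projections])
    volume = fdk_reconstruct(tool_projections)
    return seg_net(volume)

# Toy stand-ins (NOT real networks or a real FDK implementation):
tool_net = lambda p: (p > 0.5).astype(float)       # thresholding as a mock 2D segmentation
fdk = lambda projs: projs.mean(axis=0)             # averaging as a mock reconstruction
seg_net = lambda v: (v > 0.25).astype(np.uint8)    # thresholding as a mock 3D segmentation

projections = [np.random.rand(64, 64) for _ in range(4)]  # four simulated projections
mask = guidance_pipeline(projections, tool_net, fdk, seg_net)
print(mask.shape)  # → (64, 64)
```

In practice, both stand-in networks would be deep CNNs trained on simulated CT data, and the reconstruction step would be a genuine cone-beam FDK backprojection of the four tool-only projections.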