Articles published on Image-guided surgery
2005 Search results
Sort by Recency
- New
- Research Article
- 10.3390/bios16020090
- Feb 1, 2026
- Biosensors
- Anjun Zhu + 12 more
Fluorescence imaging is crucial for providing detailed information in clinical practice. However, traditional first near-infrared (NIR-I) dyes such as indocyanine green (ICG) exhibit limitations including shallow penetration depth, low contrast, and suboptimal clarity due to light scattering and autofluorescence. To overcome these drawbacks, we utilized a novel amphiphilic second near-infrared (NIR-II) aggregation-induced emission (AIE) probe (TCP) with an emission range beyond 1300 nm (NIR-IIa). Using approximately 200 co-registered NIR-I/NIR-IIa image pairs acquired with TCP, we trained a SwinUnet-based deep learning model to transform low-quality NIR-I ICG images into high-resolution NIR-IIa-like images. Owing to its superior brightness and photostability, TCP enhances in vivo fluorescent angiography, offering clearer vascular details and a higher signal-to-background ratio (SBR) in the NIR-IIa region, 2.6-fold higher than that of ICG in the NIR-I region. The deep learning model successfully converted blurred NIR-I images into high-SBR NIR-IIa-like images, achieving rapid imaging speeds without compromising quality. This work introduces a synergistic “probe-plus-AI” paradigm that substantially improves both the quality and speed of clinical fluorescence imaging, providing a pathway that is immediately translatable to enhanced diagnostics and image-guided surgery.
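The abstract's headline metric, signal-to-background ratio (SBR), has a conventional definition: mean intensity in a signal region of interest divided by mean intensity in a background region. A minimal sketch of that computation, assuming the conventional definition (the function name and the synthetic frame are illustrative, not from the paper):

```python
import numpy as np

def signal_to_background_ratio(img, signal_mask, background_mask):
    """Mean intensity in the signal ROI divided by mean intensity
    in the background ROI -- the standard SBR convention."""
    return img[signal_mask].mean() / img[background_mask].mean()

# Toy example: a synthetic frame with a bright vessel on a dim background.
img = np.full((64, 64), 10.0)           # uniform background level
img[30:34, :] = 130.0                   # simulated vessel signal
vessel = np.zeros_like(img, dtype=bool)
vessel[30:34, :] = True
sbr = signal_to_background_ratio(img, vessel, ~vessel)  # 130 / 10 = 13.0
```

A "2.6-fold higher SBR" claim like the one above is then just the ratio of two such values measured on the same scene with the two probes.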
- New
- Research Article
- 10.1016/j.jsurg.2025.103821
- Feb 1, 2026
- Journal of surgical education
- Bin Zheng + 3 more
Training Surgeons' Visual Scanning Pattern in Laparoscopic Surgery to Enhance Patient Safety.
- New
- Research Article
- 10.1016/j.biomaterials.2025.123549
- Feb 1, 2026
- Biomaterials
- Da-Yong Hou + 12 more
Translational contrast agents for use in fluorescence image-guided tumor surgery.
- New
- Research Article
- 10.3390/jcm15030937
- Jan 23, 2026
- Journal of Clinical Medicine
- Gonzalo Ruiz-De-Leon + 14 more
Background: Orbital dermoid cysts are common benign lesions; however, deep-seated or recurrent lesions near the orbital apex pose major surgical challenges due to their proximity to critical neurovascular structures. Lateral orbitotomy remains the reference approach, but accurate osteotomies and stable reconstruction can be difficult to achieve using conventional techniques. This study reports our initial experience using a fully digital, hospital-based point-of-care (POC) workflow to enhance precision and safety in complex orbital dermoid cyst surgery. Methods: We present a case series of three patients with orbital dermoid cysts treated at a tertiary center (2024–2025) using a comprehensive digital workflow. Preoperative assessment included CT and/or MRI followed by virtual surgical planning (VSP) with orbit–tumor segmentation and 3D modeling. Cutting guides and patient-specific implants (PSIs) were manufactured in-house under a certified hospital-based POC protocol. Surgical strategies were tailored to each lesion and included piezoelectric osteotomy, intraoperative navigation, intraoperative CT, and structured-light scanning when indicated. Results: Complete en bloc resection was achieved in all cases without capsular rupture or optic nerve injury. Intraoperative CT confirmed complete lesion removal and accurate PSI positioning and fitting. Structured-light scanning enabled radiation-free postoperative monitoring when used. All patients preserved full ocular motility, visual acuity, and facial symmetry, with no complications or recurrences during follow-up. Conclusions: The integration of VSP, in-house POC manufacturing, and image-guided surgery within a lateral orbitotomy approach provides a reproducible and fully integrated workflow. This strategy appears to improve surgical precision and safety while supporting optimal long-term functional and aesthetic outcomes in challenging orbital dermoid cyst cases.
- New
- Research Article
- 10.1021/acs.analchem.5c07102
- Jan 23, 2026
- Analytical chemistry
- Xiaohui Wang + 7 more
Hepatocellular carcinoma (HCC) poses a significant threat to global health, with postoperative survival often compromised by high recurrence rates due to undetectable occult metastases, thereby highlighting the urgent need for early diagnosis and precise intraoperative guidance. The cellular-mesenchymal epithelial transition factor (c-Met), a transmembrane receptor overexpressed in numerous malignancies including HCC, represents a compelling biomarker for cancer diagnosis and therapy. Herein, we report a near-infrared fluorescence (NIRF) probe, GM-7-MPA, constructed by conjugating a specifically screened high-affinity peptide ligand (GM-7) with a hydrophilic fluorescent dye (MPA). GM-7-MPA demonstrated high specificity and strong binding affinity for c-Met positive HCC cells in vitro, outperforming GE-137-MPA, which has been clinically evaluated for the detection of colorectal polyps. Furthermore, across various tumor-bearing mouse models, including subcutaneous xenograft, orthotopic liver cancer, and HCC pulmonary metastasis models, GM-7-MPA clearly visualized tumor lesions with a high tumor-to-background ratio (TBR). Critically, in fluorescence-guided surgical navigation studies, the probe accurately delineated tumor margins from adjacent normal tissues and effectively identified residual microfoci, thereby facilitating the complete resection of malignant tumors in xenograft models and orthotopic settings. These findings indicate that GM-7-MPA is a promising candidate for the diagnosis and surgical navigation of c-Met positive HCC, demonstrating significant translational potential and clinical application prospects.
- New
- Research Article
- 10.3892/mi.2026.297
- Jan 15, 2026
- Medicine international
- Hanisha Kukunoor + 10 more
Metastatic cancer remains a significant global health challenge, contributing to the majority of cancer-related mortality due to late detection, therapeutic resistance and the complexity of disseminated disease. Recent advances in artificial intelligence (AI) and augmented reality (AR) are transforming the landscape of metastatic cancer detection and management. AI-driven tools, including radiomics, deep learning models, and predictive analytics, enhance early identification of metastatic lesions, improve diagnostic accuracy, and support personalized treatment strategies by integrating multimodal clinical, imaging and molecular data. At the same time, AR technologies are increasingly applied in image-guided surgery, real-time tumor visualization and patient education, enabling more precise interventions and improved clinical decision-making. The combined use of AI and AR fosters multidisciplinary collaboration, facilitates comprehensive treatment planning, and may ultimately improve patient outcomes. However, despite these advancements, several challenges limit widespread implementation, including algorithmic bias, variability in data quality, concerns regarding patient privacy, and regulatory and ethical constraints. Furthermore, integration into clinical workflows requires robust validation, clinician training, and standardized guidelines. Future efforts are required to focus on developing transparent, generalizable AI models, strengthening data-security frameworks, and enhancing AR usability to ensure equitable, safe, and effective incorporation of these emerging technologies into metastatic cancer care.
- New
- Research Article
- 10.1001/jamanetworkopen.2025.51734
- Jan 13, 2026
- JAMA Network Open
- Feredun Azari + 10 more
Over 1 million pulmonary nodules are discovered each year in the US, and many of these undergo molecular imaging-guided surgery to obtain a diagnosis. Locating a small nodule and determining its malignant potential is technically challenging and prone to human error. The objective was to demonstrate the use of a machine learning (ML) algorithm with molecular imaging to analyze imaging data during lung cancer surgery and determine the malignant potential of nodules. Data were retrospectively analyzed from a prospectively collected database. Between 2014 and 2021, patients with lung nodules at the Hospital of the University of Pennsylvania were included in the study. Patients in the model development set were randomly allocated into training and validation sets in an 8:2 ratio. Data were analyzed from January 2014 to December 2021. Algorithmic tumor-to-background ratio (TBR) detection was implemented for individual images using the Image Processing Toolkit. The developed nomogram and artificial intelligence (AI) image analyzer were combined as an optical biopsy algorithm and tested prospectively between 2021 and 2024. A total of 322 patients with lung nodules were included in the study, of whom 279 had complete clinical data for analysis (175 [62.7%] female). The nomograms and image segmentation technology were developed using a large database of IMI videos (1014 video sequences) and demonstrated an area under the curve of 0.865 to 0.893 for malignant nodule assessment. On multivariate logistic regression analysis, a patient smoking history of greater than 5 pack-years (patient pack-years [PPY] >5), ex vivo back-table TBR greater than 2.0, ex vivo bisected tumor lesion TBR greater than 2.4, and in situ (inside the chest) fluorescence were found to have statistically significant associations with malignancy on final pathology.
Prospective testing in an independent set of 61 consecutive patients during IMI-guided cancer surgery demonstrated a sensitivity of 93.8%, specificity of 100%, positive predictive value of 100%, and negative predictive value of 71%. The study algorithm determined malignant potential of the lesion in less than 2 minutes (mean [SD], 1.8 [0.17] minutes) compared with a mean (SD) of 34 (11) minutes with frozen section analysis. In this cohort study of patients with indeterminate lung nodules, intraoperative imaging data analyzed by AI accurately determined if a nodule was malignant. This has the potential to improve the diagnostic challenges that occur at the time of surgery.
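The four predictors reported in the multivariate analysis read naturally as an intraoperative checklist. The sketch below simply counts how many thresholds a nodule exceeds; this count-based rule is illustrative only, since the study combined the predictors in a fitted nomogram whose weights the abstract does not give:

```python
def malignancy_criteria_met(pack_years, tbr_back_table, tbr_bisected,
                            in_situ_fluorescent):
    """Count how many of the four predictors from the abstract a nodule meets.
    Thresholds are from the reported multivariate analysis; treating them as
    an unweighted checklist (rather than the authors' nomogram) is a sketch."""
    criteria = [
        pack_years > 5,          # smoking history > 5 pack-years
        tbr_back_table > 2.0,    # ex vivo back-table TBR
        tbr_bisected > 2.4,      # ex vivo bisected-lesion TBR
        in_situ_fluorescent,     # in situ fluorescence present
    ]
    return sum(criteria)

# Hypothetical nodule meeting all four criteria.
n = malignancy_criteria_met(pack_years=20, tbr_back_table=2.5,
                            tbr_bisected=2.6, in_situ_fluorescent=True)
```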
- New
- Research Article
- 10.1007/s00259-025-07724-y
- Jan 10, 2026
- European journal of nuclear medicine and molecular imaging
- Mick M Welling + 4 more
Multimodal imaging using hybrid imaging agents is a promising strategy for diagnosing and evaluating pathologies after image-guided surgical interventions. Combining optical and radioactive imaging techniques provides a comprehensive approach to monitoring and diagnosing infections that could be more effective than routine clinical nuclear tracers for SPECT or PET imaging, thereby enabling more effective treatment, as in image-guided surgery. This review summarizes the latest developments in hybrid imaging agents and vectors for radioactive and optical imaging of bacterial, fungal, and viral infections. We pinpoint the pitfalls in the current preclinical landscape for developing infection imaging tracers. Besides diagnosing and tracking pathogens, the role of optical imaging in diagnosing and aiding antimicrobial interventions, including image-guided surgery, is discussed. Finally, practical considerations are addressed for multimodal workflow challenges in preclinical infection imaging with hybrid tracers.
- Research Article
- 10.1002/advs.202519223
- Jan 4, 2026
- Advanced science (Weinheim, Baden-Wurttemberg, Germany)
- Haohao Yan + 4 more
The nonspecific activation of activatable probes presents significant challenges in their applications for accurate cancer detection, leading to false signals in normal tissues and the potential oversight of microlesions. To address this issue, we developed a glutathione (GSH)-activatable magnetic resonance imaging (MRI) and near-infrared II (NIR-II) fluorescent probe (GAP9) using a redox capacity engineering strategy. By systematically adjusting the reaction pH during probe synthesis, we could precisely modulate its oxidation capacity to ensure that the activation window of the probe precisely matched tumor GSH concentrations. This strategy ensures that GAP9 remains in the "OFF" state within normal tissues through dual MRI/NIR-II quenching mechanisms, minimizing false-positive signals and background noise. Upon reaching tumor sites, GAP9 undergoes GSH-triggered disassembly, rapidly activating T1-weighted MRI for preoperative tumor mapping and unlocking NIR-II fluorescence for real-time intraoperative tumor delineation. This tumor-adaptable strategy enables the specific localization of microtumor lesions, intraoperative margin monitoring, and complete excision of ultrasmall residual foci ≤1mm, achieving a 96% detection rate in a mouse model of peritoneal metastasis. This study presents a novel paradigm in molecular probe design, emphasizing the potential of integrating programmable redox chemistry with tumor-specific characteristics to enhance detection accuracy, ultimately improving surgical outcomes and patient prognoses.
- Research Article
- 10.1016/j.fmre.2026.01.011
- Jan 1, 2026
- Fundamental Research
- Jingyao Wang + 5 more
Clinical fluorescent contrast agents for image-guided surgery
- Research Article
- 10.1002/alr.70000
- Jan 1, 2026
- International forum of allergy & rhinology
- Jeremy Ruthberg + 7 more
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning algorithm used to create 3D surgical field reconstructions. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented, and compared to ground-truth postoperative CT. Accuracy was assessed using Hausdorff distance (surface alignment), Dice similarity coefficient (DSC) (volumetric overlap), and Bland‒Altman analysis (BAA) (statistical agreement). Comparison of the updated CT and the ground-truth postoperative CT indicated an average Hausdorff distance of 0.27±0.076mm and a 95th percentile Hausdorff distance of 0.82±0.165mm, indicating sub-millimeter surface alignment. The DSC was 0.93±0.012 with values >0.9 suggestive of excellent spatial overlap. BAA indicated modest underestimation of volume on the updated CT versus ground-truth CT with a mean difference in volumes of 0.40 cm3 with 95% limits of agreement of 0.04‒0.76 cm3 indicating that all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
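Two of the accuracy metrics quoted above are standard and easy to state precisely. A minimal NumPy sketch of the Dice similarity coefficient on boolean voxel masks (Hausdorff distance, the surface-alignment metric, is typically computed with a dedicated mesh or point-cloud library); the toy masks are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean voxel masks:
    2 * |A intersect B| / (|A| + |B|)."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy segmentations: the "updated CT" mask misses one row of the ground truth.
truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True        # 36 voxels
updated = np.zeros((10, 10), dtype=bool)
updated[3:8, 2:8] = True      # 30 voxels, all inside the truth mask
d = dice(truth, updated)      # 2 * 30 / (36 + 30)
```

Values above 0.9, as reported in the study, indicate that the overlapping volume dominates both masks.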
- Research Article
- 10.1016/j.biomaterials.2025.123951
- Dec 30, 2025
- Biomaterials
- Shengjie Ma + 7 more
Renal-clearable organic NIR-II dye cluster for non-invasive ureteral imaging.
- Research Article
- 10.1038/s41598-025-33157-6
- Dec 23, 2025
- Scientific Reports
- Sunam Mander + 5 more
Patients with ductal carcinoma in situ (DCIS), the earliest form of breast cancer, are treated with breast-conserving surgery (BCS) when feasible. The primary objective of BCS is to completely resect a lesion with a tumor-free margin in a single surgery, as positive margins are a risk factor for local recurrence. However, re-excision rates due to positive margins are high. Since reported positive margin rates after BCS for DCIS and invasive breast cancer are 20–81% and 15–47%, respectively, new intraoperative intervention for BCS represents a medical unmet need to achieve adequate resection margins. We previously established a new near-infrared (NIR) fluorescence imaging probe ICG-p28 by utilizing indocyanine green (ICG) labeled with cell-penetrating peptide carrying a tumor-targeting motif, and demonstrated that our imaging approach accurately identified the tumor margins in invasive breast cancer animal models. We hypothesized that our imaging approach can be applied to DCIS and yield similar rates when compared with invasive breast cancer. Here, we report that the real-time imaging of DCIS mouse models using ICG-p28 showed significant improvement in tumor recurrence rates by clear tumor margin identification compared to a control agent, ICG alone. With the chemical and biological characteristics of ICG-p28, our promising approach holds translational potential for image-guided DCIS surgery, reducing re-excision and tumor recurrence rates through tumor margin identification.
- Research Article
- 10.1097/js9.0000000000004442
- Dec 17, 2025
- International journal of surgery (London, England)
- Nayeon Choi + 2 more
Image-guided surgery (IGS) using biological markers enhances tumor eradication while preserving normal tissues. However, defining optimal imaging targets is challenging due to phenotypic changes that occur during tumor evolution or treatment. This study proposes a new surgical approach involving preoperative tumor analysis followed by individualized IGS and assesses its significance in surgical oncology. Using in-vivo tumor models (FADU and A253 tumors), preoperative tumor analysis was performed to assess the relative expression of various markers using immunohistochemistry. The markers with the highest signal-to-background ratios (SBRs) were selected for molecular imaging. The results of individual tumor biology-based imaging were compared to tumor pathology, and the accuracy of imaging was evaluated using a lattice grid segmentation method. In addition to treatment-naïve tumors, we simulated two in-vivo tumor models in the post-radiation and post-chemotherapy settings and investigated the diagnostic ability of individual tumor biology-based molecular imaging in these settings. Finally, the surgical outcomes of IGS were assessed. In treatment-naïve tumors, integrin α5β3 and CA19-9 for FADU tumors and CEA and GLUT1 for A253 tumors had the best SBRs. Tumor imaging using these markers predicted tumor boundaries with a high concordance with those on tumor pathology. In the post-radiation and post-chemotherapy settings, different markers exhibited high SBRs in tumor analysis, indicating that tumor phenotypes changed after treatment. Similarly, tumor imaging using these selected markers improved the anatomical delineation of tumors. Surgical completeness and outcomes (recurrence) were significantly better in the individual tumor biology-based IGS group compared to the conventional (unaided) surgery group. This proof-of-concept study demonstrates that tumor biology-based molecular imaging is feasible and can accurately delineate tumor boundaries. 
Preoperative analysis allows for the selection of optimal imaging targets with high SBRs, potentially enhancing surgical precision. However, its generalizability and clinical relevance require validation across multiple tumor models with sufficient sample size, extended follow-up, and clinical studies.
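The marker-selection step described above reduces to picking, per tumor, the candidate with the highest measured SBR. A one-line sketch with hypothetical SBR values (the numbers below are illustrative, not measurements from the study):

```python
def select_imaging_marker(sbr_by_marker):
    """Pick the marker with the highest signal-to-background ratio,
    mirroring the preoperative tumor-analysis step described above."""
    return max(sbr_by_marker, key=sbr_by_marker.get)

# Hypothetical immunohistochemistry-derived SBRs for one treatment-naive tumor.
sbrs = {"integrin a5b3": 4.1, "CA19-9": 3.8, "CEA": 1.9, "GLUT1": 2.2}
best = select_imaging_marker(sbrs)
```

After radiation or chemotherapy, the same selection would be rerun, since the study found that the highest-SBR markers change with treatment.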
- Research Article
- 10.20517/ais.2025.92
- Dec 16, 2025
- Artificial Intelligence Surgery
- Omar Kasimieh + 10 more
Aim: Real-time image guidance using deep learning is being increasingly used in surgery. This systematic review aims to characterize intraoperative systems, mapping applications, performance and latency, validation practices, and the reported effects on workflow and patient-relevant outcomes. Methods: A systematic review was conducted on PubMed, Embase, Scopus, ScienceDirect, IEEE Xplore, Google Scholar, and Directory of Open Access Journals as of December 31, 2024. Eligible English-language, peer-reviewed diagnostic accuracy, cohort, quasi-experimental, or randomized studies (2017-2024) evaluated deep learning for real-time intraoperative guidance. Two reviewers screened, applied the Joanna Briggs Institute checklists, and extracted the design, modality, architecture, training, validation, performance, and latency. Heterogeneity precluded meta-analysis. Results: Twenty-seven studies spanning laparoscopic, neurosurgical, breast, colorectal, cardiac, and other workflows met the criteria. The modalities included red-green-blue laparoscopy or endoscopy, ultrasound, optical coherence tomography, cone-beam computed tomography, and stimulated Raman histology. The architectures were mainly convolutional neural networks with frequent transfer learning. Reported performance was high, with classification accuracy commonly 90%-97% and segmentation Dice or intersection over union up to 0.95 at operating-room-compatible speeds of about 20-300 frames per second or sub-second per-frame latency; volumetric pipelines sometimes required up to 1 min. Several systems demonstrated intraoperative feasibility and high surgeon acceptance, yet fewer than one quarter reported external validation and only a small subset linked outputs to patient-important outcomes. Conclusion: Deep-learning systems for real-time image guidance exhibit strong technical performance and emerging workflow benefits.
Priorities include multicenter prospective evaluations, standardized reporting of latency and external validation, rigorous human factors assessment, and open benchmarking to demonstrate generalizability and patient impact.
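The throughput figures in the review translate directly into per-frame latency budgets, which is the number that matters when judging whether a model fits an operating-room video pipeline. A trivial conversion sketch:

```python
def per_frame_latency_ms(fps):
    """Per-frame latency (milliseconds) implied by a given throughput,
    for comparing a model against an operating-room frame budget."""
    return 1000.0 / fps

fast = per_frame_latency_ms(300)   # upper end reported: ~3.3 ms per frame
slow = per_frame_latency_ms(20)    # lower end reported: 50 ms per frame
```

At 20 fps a model has a 50 ms budget per frame end to end, which is why the volumetric pipelines that take up to a minute cannot run continuously and are instead invoked on demand.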
- Research Article
- 10.1007/s00464-025-12461-2
- Dec 9, 2025
- Surgical endoscopy
- Raphael Kwok + 7 more
Image-guided surgery has unique depth perception challenges. This complicates procedures requiring intracorporeal measurements, including gastric bypass, where conventional methods are subjective. Computer vision (CV) has been used for tool identification, which can locate key features for a mathematics-based prediction of 3D distance. This feasibility study aims to develop such a CV tool to objectively measure intraoperative distances. Development of the proof-of-concept digital ruler involved a CV instrument detection algorithm and a computer program to compute and display inter-grasper distance. These were then combined and validated. The CV algorithm was trained by annotating laparoscopic surgery videos to identify the jaw assembly. Model performance was tested against ground truth annotations. The computer program was then developed and tested with manual annotations in a bench-box simulator, using a ruler for ground truth. Both components were combined in a prototype for beta-testing and validation in a simulation setting, using a bench box and surgery video recordings. Bench box validation compared pipeline and human predictions to actual measured lengths of simulated bowel. Video validation compared pipeline predictions to those shown by an intracorporeal ruler. A total of 1205 frames (64 cases) were annotated. The model was trained using a 60/20/20 training/testing/validation split. Compared to annotations, the model had a Precision Recall AUC, accuracy, and Dice Score of 0.89, 0.99, and 0.80, respectively. Forty-nine sample measurement frames were used to validate the computer program, with a mean error of estimation of 0.79 cm. Bench box testing compared to a test group showed the prototype's best performance at larger distances (150 cm), with a "human in the loop" system. In the video validation, the prototype demonstrated low measurement variability.
CV-based techniques can be effectively used to reduce subjectivity of intracorporeal measurement by delivering an objective measurement during image-guided surgery.
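The abstract does not give the geometry the authors used, but the underlying idea — locate key tool features in the image, then convert them to a metric 3D distance — can be sketched with the standard pinhole camera model. All intrinsics, pixel coordinates, and depths below are hypothetical:

```python
import math

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with known depth into 3D camera
    coordinates using the pinhole model: X = (u - cx) * Z / fx, etc."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def inter_grasper_distance(p1, p2):
    """Euclidean distance between two 3D points, e.g. two detected jaw tips."""
    return math.dist(p1, p2)

# Hypothetical intrinsics and two detected jaw centroids (depths in mm).
fx = fy = 800.0
cx = cy = 320.0
a = pixel_to_camera(400, 320, 100.0, fx, fy, cx, cy)
b = pixel_to_camera(240, 320, 100.0, fx, fy, cx, cy)
d = inter_grasper_distance(a, b)   # 20 mm apart in this toy scene
```

The hard part in practice, and the focus of the study, is obtaining reliable feature locations (and depth) from monocular laparoscopic video rather than the distance arithmetic itself.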
- Research Article
- 10.1002/anie.202522260
- Dec 8, 2025
- Angewandte Chemie (International ed. in English)
- Weili Wang + 9 more
Achieving high fluorescence efficiency in organic fluorophores within the second near-infrared window (NIR-II, 1000–1700 nm) remains challenging, as extended π-conjugation and active intramolecular motions typically funnel excitation energy into non-radiative decay. Here, we present peripheral cyanation as a molecular design strategy that directly modulates excited-state dynamics and suppresses non-radiative relaxation. Incorporation of cyano groups (A') into the D-A-D scaffold of BBTCz afforded BBTCzCN with an A'-D-A-D-A' architecture, which significantly reduced vibronic coupling compared to the parent dye. Upon encapsulation with DSPE-mPEG5000, BBTCzCN nanoparticles (NPs) retained a high FLQY of 2.8% with a record-high brightness of 565 M⁻¹ cm⁻¹, representing a 10.4-fold enhancement over BBTCz NPs and placing it among the brightest organic NIR-II emitters reported to date. Mechanistic studies combining density functional theory and ultrafast spectroscopy revealed that cyanation synergistically suppressed vibrational relaxation and internal conversion, thereby prolonging radiative decay pathways. As a result, BBTCzCN NPs enabled high-resolution vascular imaging, real-time lymphatic tracking, and precise intraoperative delineation of tumors and peritoneal metastases. This work establishes peripheral cyanation as a broadly applicable molecular design strategy for tailoring excited-state decay pathways, advancing the development of next-generation NIR-II fluorophores for deep-tissue imaging and image-guided surgery.
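Fluorophore "brightness" as quoted above is conventionally the product of the molar extinction coefficient and the fluorescence quantum yield, in M⁻¹ cm⁻¹. A quick consistency sketch on the reported figures; note the implied extinction coefficient is a back-calculation for illustration, not a value stated in the abstract:

```python
def brightness(epsilon_M_cm, quantum_yield):
    """Fluorophore brightness = molar extinction coefficient (M^-1 cm^-1)
    x fluorescence quantum yield (dimensionless fraction)."""
    return epsilon_M_cm * quantum_yield

# Reported: brightness 565 M^-1 cm^-1 at FLQY = 2.8%. These imply an
# effective extinction coefficient of roughly 565 / 0.028 ~ 2.0e4 M^-1 cm^-1
# (a back-of-envelope check, not a figure from the paper).
implied_epsilon = 565 / 0.028
b = brightness(implied_epsilon, 0.028)
```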
- Research Article
- 10.1002/mp.70149
- Nov 24, 2025
- Medical physics
- Runzhe Han + 5 more
Image-guided surgery is a critical technique in maxillofacial surgery. The foundation of image-guided surgery is image registration. Traditional image registration methods have limitations in terms of invasiveness, complexity, and unsatisfactory accuracy. Freehand 3D ultrasound (US) imaging using a tracked 2D US probe may offer a non-invasive, real-time, and accurate alternative. Purpose: This study aims to develop a novel freehand 3D US imaging framework for midfacial bone surface reconstruction and registration with preoperative 3D data (e.g., computed tomography), enabling accurate intraoperative surgical navigation in maxillofacial surgery. First, a customized stereo camera is used to track the pose of a 2D US probe during the freehand US scanning toward the midfacial bone surface. Then, a short-term dense concatenate network (STDC) is employed to segment the bone surface from the US image. The segmented pixels with spatial information form a coarse 3D volume in real time. The 3D volume's voxels are then converted to a coarse point cloud. A template matching denoising technique is utilized to remove noisy and outlier points, followed by a self-supervised Freehand 3D Ultrasound Neural Surface Reconstruction network (FUNSR) to reconstruct the point cloud to a smooth surface mesh. Finally, the resulting fine bone surface is registered with preoperative 3D data for quantitative evaluation. A total of 1000 zygomatic ultrasound images (split into 700 training, 150 validation, and 150 test images) were used to train the segmentation network. The reconstruction network was trained with self-supervision. The reconstruction accuracy of the network was validated using surface registration error (SRE), and the registration accuracy was verified using target registration error (TRE). Method performance improvement was evaluated using t-tests and analysis of variance, with Tamhane's T2 test applied for multiple comparison correction to control the false discovery rate.
Cohen's effect sizes were calculated to quantify performance differences. In the phantom experiment, the average SRE was 0.387 ± 0.034 mm, and the average TRE was 0.802 ± 0.177 mm. Compared with registration using only voxel reconstruction results (SRE = 1.301 ± 0.133 mm, TRE = 1.155 ± 0.359 mm), the accuracy was improved (Cohen's d = 9.416 for SRE, Cohen's d = 1.247 for TRE, and p < 0.01 for both). Also, the accuracy remained uniform across various regions of the midface (p = 0.918). When using only local region reconstruction for registration, the decrease in overall accuracy was relatively minor (p = 0.025). In the volunteer trials, the average SRE was 0.445 ± 0.099 mm. Compared with the fundamental framework of our method (SRE = 0.955 ± 0.204 mm), the proposed template matching denoising and surface reconstruction components further enhance the registration accuracy (p < 0.001, Cohen's d > 2.0). The proposed freehand 3D US imaging framework could offer a noninvasive, accurate, and quasi-real-time solution for midfacial bone surface reconstruction and image registration in maxillofacial surgery.
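Target registration error (TRE), the headline accuracy number above, is simply the mean Euclidean distance between corresponding landmark points after registration. A minimal sketch with hypothetical points (coordinates in mm):

```python
import numpy as np

def target_registration_error(targets_true, targets_registered):
    """Mean Euclidean distance between corresponding target points after
    registration -- the TRE metric reported in the study."""
    diffs = np.asarray(targets_true) - np.asarray(targets_registered)
    return float(np.linalg.norm(diffs, axis=1).mean())

# Two hypothetical anatomical targets and their post-registration positions.
true_pts = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
reg_pts = [(0.6, 0.0, 0.0), (10.0, 0.8, 0.0)]
tre = target_registration_error(true_pts, reg_pts)   # (0.6 + 0.8) / 2 = 0.7 mm
```

Surface registration error (SRE) is analogous but measured between a reconstructed surface and the reference surface rather than between discrete targets.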
- Research Article
- 10.1002/mp.70105
- Nov 21, 2025
- Medical physics
- Arnaud R Brian-Choux + 6 more
Photon-counting detector (PCD) CT offers enhanced spatial resolution, improved image contrast, reduced radiation dose, and material differentiation through K-edge imaging. These features may be of value for guiding microrobots and surgical instruments during minimally invasive brain surgery. This study evaluates the feasibility of using K-edge imaging with deep silicon (dSi) PCD CT to estimate the pose and location of microrobots within the human head, utilizing materials like neodymium (Nd) and tungsten (W) with distinct K-edge energies. A micro-driller robot with a cubic Nd magnet (K-edge energy = 43.5 keV) and a 3D-printed drill bit was used. W (K-edge energy = 69.5 keV) was added to aid in orientation detection. These materials were positioned at distances of 0–1 mm within a 3D-printed setup and placed in a human skull filled with ballistic gel, alongside other metallic parts. The setup was scanned using a prototype dSi CT scanner with eight energy bins. Scanning parameters included 120 kVp, 300 mAs, 0.5 s rotation time, and 4-cm z-collimation. Nd and W components were distinguishable, with Nd showing higher contrast in low-energy virtual monoenergetic images (VMIs) and W being more radiopaque in higher VMIs. The signal-to-noise ratios (SNR) at 70 keV and 1-mm distance were 31.0 for Nd and 132.7 for W. The error in estimating the distance between components (ΔL) ranged from 82 to 263 µm, comparable to one image voxel. Deep silicon PCD-CT with K-edge imaging successfully differentiated microrobot components in a human head phantom, indicating its potential for precision image-guided robotic surgery. Further studies are needed to optimize radiation doses for clinical use.
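The SNR figures above follow one common convention, mean ROI signal over the standard deviation of the background noise; the abstract does not state which exact definition was used, so the sketch below is an assumption. The synthetic data are illustrative:

```python
import numpy as np

def snr(roi, background):
    """Signal-to-noise ratio: mean ROI intensity divided by the standard
    deviation of a background region (one common convention; the study's
    exact definition is not given in the abstract)."""
    return roi.mean() / background.std()

rng = np.random.default_rng(0)
background = rng.normal(0.0, 2.0, 10_000)   # synthetic noise, sigma = 2
roi = np.full(100, 62.0)                    # synthetic marker intensity
value = snr(roi, background)                # ~31, like the Nd figure above
```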
- Research Article
- 10.1007/s00259-025-07626-z
- Nov 19, 2025
- European journal of nuclear medicine and molecular imaging
- Giacomo Gariglio + 9 more
Complete and minimally invasive cancer surgery remains challenging. Targeting the fibroblast activation protein (FAP) offers valuable opportunities for surgical planning, intraoperative guidance and improved resection outcomes. Herein, we developed the first dimeric, dual-modality FAP-targeted imaging agents and investigated the influence of different near-infrared cyanine-7 dyes on their final properties. Four dual-modality ligands based on the Fusarinine C scaffold were synthesized. Their FAP specificity and retention were evaluated in cellular and xenograft tumor models. The most promising candidates were labelled with 67/68Ga and assessed in vivo at early time points by PET/CT imaging and by comparative SPECT/CT and NIR fluorescence imaging (FI) up to two days post-injection. Distinct fluorophore influences on the properties of the final compounds were identified. The introduction of the s775z dye demonstrated a beneficial effect on the cellular uptake and on the in vivo biodistribution profile, as revealed by the greatest improvement in blood clearance and the least off-target accumulation in liver and kidneys when compared to the control and to the other candidates, respectively. Ex vivo experiments and in vivo PET/CT, SPECT/CT and FI studies in xenografted mice confirmed these findings and demonstrated sustained tumor uptake (> 7% ID/g and > 5% ID/g at 1 h and 1 day p.i., respectively) for 67Ga-s775z-FFAPi and 67Ga-IRDye-FFAPi. In this study we introduced and evaluated novel dimeric FAP-targeting agents for dual-modality applications. In the preclinical setting, within the group of compounds investigated, two candidates enabled tumor visualization through PET, SPECT and optical imaging, providing satisfactory background contrast after a single administration and supporting their potential for preoperative nuclear imaging and subsequent fluorescence-guided surgery.