  • Open Access
  • Supplementary Content
  • 10.1049/htl2.70035
Respiratory Rate Measurement Using Mobile Applications in Healthcare Settings: A Scoping Review
  • Jan 28, 2026
  • Healthcare Technology Letters
  • Lachlan Sallabank + 4 more

Abstract: Respiratory rate (RR) is a strong indicator of clinical trajectory and forms the basis of patient care and assessment. However, clinicians often lack a way to obtain an RR easily without resorting to inefficient manual methods or costly technology. To remedy this, several phone applications have emerged in which clinicians tap out each breath to calculate an RR. We aimed to map the available evidence for tap-per-breath applications used in healthcare settings. We searched multiple databases for primary research articles that evaluated tap-per-breath apps in healthcare settings. Fourteen articles were selected for this review, most of them cross-sectional and hospital-based. Most applications reported high usability and efficiency, although accuracy results were mixed across the included literature. Median-based apps more often measured RR accurately, but more research is required. Articles were commonly limited in generalisability by poorly defined reference standards, small sample sizes, or the use of retrospective video recordings for patient assessment. Overall, studies showed favourable usability and efficiency, with median-based apps demonstrating greater consistency and accuracy of RR measurements, though the scope of this review and the limited evidence restrict any far-reaching clinical implications until further evidence emerges.
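The median-based approach the review highlights can be sketched in a few lines: the app timestamps each tap and derives RR from the median inter-tap interval, which resists a single mistimed tap better than a simple count-over-time estimate. This is an illustrative sketch, not any reviewed app's actual algorithm; the function name and tap data are hypothetical.

```python
import numpy as np

def rr_from_taps(tap_times_s, min_taps=4):
    """Estimate respiratory rate (breaths/min) from tap timestamps (seconds)
    using the median inter-tap interval. Illustrative sketch only."""
    taps = np.asarray(tap_times_s, dtype=float)
    if taps.size < min_taps:
        raise ValueError("need more taps for a stable estimate")
    intervals = np.diff(taps)
    return 60.0 / np.median(intervals)

# Regular breathing at 15 breaths/min (4 s per breath), with one missed tap
# creating a 10 s gap and one accidental extra tap creating a 2 s gap.
taps = [0.0, 4.0, 8.0, 12.0, 16.0, 26.0, 28.0, 32.0]
print(round(rr_from_taps(taps), 1))  # → 15.0
# A count-over-time estimate, 60 * (len(taps) - 1) / 32 s, gives ~13.1 instead.
```

The median discards the two outlier intervals entirely, which matches the review's observation that median-based apps tended to be more consistent.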

  • Open Access
  • Research Article
  • 10.1049/htl2.70052
Wavelet‐Based Denoising Optimization for Endoscopic Gastric Slow‐Wave Recordings
  • Jan 24, 2026
  • Healthcare Technology Letters
  • Peter Tremain + 6 more

Abstract: New, minimally invasive, endoscopic methods for recording gastric bioelectrical slow waves from the mucosal surface are emerging to address the limitations of invasive recordings. Filtering techniques for these new methods have so far relied on protocols developed for invasive recordings. Updated signal-processing techniques, such as the discrete wavelet transform (DWT), optimised for endoscopic recording conditions promise more effective noise removal for these signals. Synthetic signals were constructed using averaged slow-wave data and noise segmented from existing endoscopic gastric bioelectrical recordings from 12 patients. DWT was performed on the synthetic signals using 989 different parameter combinations to remove noise. Savitzky-Golay (SG) filtering was also performed on the synthetic signals to provide a comparative baseline for classical filter performance. Combined SG filtering and DWT was then investigated using the top-performing DWT parameters. Filter performance was evaluated using six established metrics, along with inspection of the power spectral density (PSD) calculated on sample signals. Statistical significance was analysed using a paired two-tailed Student's t-test or Wilcoxon signed-rank test. For signals with moderate signal-to-noise ratio (SNR), DWT-based methods outperformed traditional SG filtering on all metrics considered: signal-distortion ratio (0.84 ± 0.45 vs. 1.34 ± 0.99), root-mean-square error (280 ± 150 µV vs. 450 ± 330 µV), percentage root-mean-square difference (78 ± 42% vs. 113 ± 83%), noise-correction ratio (0.94 ± 0.17 vs. 0.50 ± 0.26), SNR improvement (5.9 ± 3.0 dB vs. 2.1 ± 2.7 dB) and filter performance metric (0.96 ± 0.42 vs. 1.8 ± 1.2). All p-values were <0.05. The combination of SG filtering with DWT provided improved signal denoising compared to SG filtering alone, whilst being less aggressive than DWT alone. Inspection of the calculated PSDs for sample signals reaffirmed these results. The results presented in this study indicate that for endoscopic gastric bioelectrical recordings with moderate SNR, modern denoising techniques based on DWT can outperform traditional SG filtering. More efficient noise removal using DWT can allow better automated detection of slow-wave activations and more reliable, efficient data processing.
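The DWT-versus-SG comparison can be illustrated in miniature: build a synthetic slow wave plus noise, soft-threshold the wavelet detail coefficients, reconstruct, and compare against a Savitzky-Golay baseline. The sketch below uses a hand-rolled Haar transform and the universal threshold; the paper explored 989 DWT parameter combinations on patient-derived noise, so the wavelet choice, threshold rule, signal model and SG settings here are all simplified, illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def haar_dwt(x, levels):
    """Multilevel Haar DWT (len(x) must be divisible by 2**levels)."""
    details, a = [], x
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return a, details[::-1]                    # coarsest detail first

def haar_idwt(a, details):
    """Exact inverse of haar_dwt."""
    for d in details:
        out = np.empty(2 * d.size)
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(0)
fs = 32                                        # Hz; power-of-two record length
t = np.arange(2048) / fs                       # 64 s of signal
clean = 500 * np.sin(2 * np.pi * (3 / 60) * t) # ~3 cycles/min "slow wave", in µV
noisy = clean + 150 * rng.standard_normal(t.size)

# DWT denoising: soft-threshold all detail coefficients with the universal
# threshold estimated from the finest scale, then reconstruct.
a, details = haar_dwt(noisy, levels=6)
sigma = np.median(np.abs(details[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(noisy.size))
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
dwt_out = haar_idwt(a, details)

# Savitzky-Golay baseline (window and order are illustrative choices).
sg_out = savgol_filter(noisy, window_length=101, polyorder=3)

rmse = lambda x: np.sqrt(np.mean((x - clean) ** 2))
print(f"RMSE noisy {rmse(noisy):.0f} µV | SG {rmse(sg_out):.0f} µV | DWT {rmse(dwt_out):.0f} µV")
```

Because the slow wave lives almost entirely in the coarse approximation band, thresholding the detail coefficients removes mostly noise, which is the intuition behind the paper's SNR-improvement results.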

  • Journal Issue
  • 10.1049/htl2.v13.1
  • Jan 1, 2026
  • Healthcare Technology Letters

  • Open Access
  • Research Article
  • 10.1049/htl2.70040
Comparative Evaluation of Ultrasound‐Guided Peripheral Intravenous Catheter Insertion Techniques in a Virtual Reality Simulator
  • Jan 1, 2026
  • Healthcare Technology Letters
  • Alejandro Olivares + 3 more

Abstract: Peripheral intravenous catheter (PIVC) insertion is a common yet challenging procedure. Although ultrasound guidance improves procedural accuracy and patient outcomes, its complexity limits its routine adoption to highly experienced clinicians. This paper introduces a virtual reality (VR) simulator developed specifically for training in ultrasound-guided PIVC insertion. This study aims to validate the simulator's realism and relevance through face, content, and construct assessments, and to demonstrate its utility as a platform for comparing various approaches to PIVC insertion. Thirty participants from diverse medical backgrounds and levels of expertise completed three scenarios, each featuring a different procedural technique, within the simulator's controlled virtual environment. The simulator demonstrated strong face and content validity, with participants rating its realism at 7.1/10 and enjoyment at 8.2/10. Performance data showed that expert participants maintained higher success rates and performance across all procedural scenarios, supporting the simulator's construct validity. In the standard-approach scenario, novices required 230.91 ± 158.77 s to complete the task and achieved only a 45% success rate, compared to experts' 95.48 ± 65.74 s and 80% success rate. In the scenario involving an alignment assistance device, where needle insertion was aligned with the ultrasound image plane, novice success rates increased to 75% and the number of attempts decreased from 8.95 ± 6.69 to 2.75 ± 2.67, narrowing the performance gap with experts. These findings highlight the simulator's potential not only as an effective training tool but also as a platform for the objective evaluation of different procedural techniques.

  • Open Access
  • Research Article
  • 10.1049/htl2.70051
Ensemble Machine Learning Approaches for Automated Fungal Keratitis Diagnosis Using In Vivo Confocal Microscopy Images
  • Dec 19, 2025
  • Healthcare Technology Letters
  • Sowmya Kamath S + 6 more

Abstract: Fungal keratitis (FK) is a severe ocular infection that can lead to significant vision problems or blindness if not diagnosed and treated promptly. Early and accurate detection of FK is essential for effective management. Traditional diagnostic methods are often time-consuming and require specialized laboratory resources. Recently, advances in artificial intelligence and computer vision have enabled automated diagnosis of FK using slit-lamp images. In this article, a comprehensive evaluation of state-of-the-art techniques for classifying FK using in vivo confocal microscopy (IVCM) images is presented. Detailed experiments and performance evaluations of various machine learning models are systematically performed, with a focus on assessing the effect of diverse techniques for image processing, data augmentation, hyperparameter selection and model fine-tuning on each model's strengths and limitations. Experiments revealed that applying green-channel preprocessing with a 12-feature set achieved 99% accuracy using Random Forest, highlighting its effectiveness in FK detection, while complex techniques such as histogram modelling reduced accuracy to 64%. Robust models such as AdaBoost and RUSBoost maintained high F1-scores, demonstrating adaptability to imbalanced medical datasets and to real-world clinical scenarios.
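The winning recipe reported above, green-channel statistics feeding a Random Forest, can be sketched end to end on synthetic patches. The 12 summary features, the texture model for the "fungal" class, and all names below are assumptions for illustration; the paper's actual feature set and IVCM data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def green_features(img):
    """Twelve summary statistics of the green channel (an illustrative
    stand-in for the paper's unspecified 12-feature set)."""
    g = img[..., 1].astype(float)
    s = g.std() + 1e-8
    return np.array([
        g.mean(), g.std(), g.min(), g.max(),
        np.abs(np.diff(g, axis=0)).mean(),       # crude texture measure
        ((g - g.mean()) ** 3).mean() / s ** 3,   # skewness
        ((g - g.mean()) ** 4).mean() / s ** 4,   # kurtosis
        *np.percentile(g, [10, 25, 50, 75, 90]),
    ])

def make_patch(fungal):
    """Synthetic 32x32 RGB patch; 'fungal' patches get hyphae-like
    high-frequency striping in the green channel."""
    img = rng.normal(128.0, 10.0, size=(32, 32, 3))
    if fungal:
        img[..., 1] += 30.0 * np.sin(1.5 * np.arange(32))[:, None]
    return img

y = rng.integers(0, 2, size=200)
X = np.array([green_features(make_patch(label)) for label in y])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

On this toy data the striping shifts both the standard deviation and the texture feature, so the forest separates the classes easily; real IVCM images are far noisier, which is why the paper's preprocessing comparison matters.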

  • Research Article
  • 10.1049/htl2.70030
A Landmark‐Free 3D–2D Rigid Liver Registration via Point Cloud Matching for Laparoscopic Surgery
  • Dec 3, 2025
  • Healthcare Technology Letters
  • Binyan Huang + 2 more

Abstract: Real-time registration of preoperative 3D liver models to intraoperative 2D laparoscopic images is essential for augmented reality navigation in minimally invasive liver surgery. However, 3D–2D registration typically depends on anatomical landmark extraction and pose estimation based on iterative projection-based landmark distance computation, which is time-consuming. Unlike iterative pose refinement strategies, our method treats liver pose estimation as a partial-to-complete point matching problem. First, our method leverages monocular depth estimation to reconstruct partial intraoperative point clouds from a single RGB image. Then, a two-stage point matching framework establishes dense 3D–3D correspondences, ultimately inferring the 6-DoF rigid pose by solving a weighted SVD over the matched point pairs. Experiments yielded a reprojection error of 126.37 ± 48.98 pixels on the P2ILF dataset and a target registration error of 25.20 mm on the LLR-LUS dataset. These results indicate that our method achieves promising accuracy and efficiency in aligning preoperative models to intraoperative scenes, suggesting its potential for practical rigid alignment in near real-time laparoscopic liver AR navigation.
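The pose-recovery step described above, solving a weighted SVD over matched point pairs, is the classic weighted Kabsch/Umeyama alignment. Below is a minimal numpy sketch; the function name and the synthetic correspondences are illustrative, and the paper's two-stage matching that produces the pairs is not reproduced.

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Closed-form 6-DoF rigid pose (R, t) minimising
    sum_i w_i * ||R @ src_i + t - dst_i||^2 via weighted SVD."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    c_src, c_dst = w @ src, w @ dst                     # weighted centroids
    H = (src - c_src).T @ ((dst - c_dst) * w[:, None])  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

# Sanity check: recover a known pose from exact correspondences.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true = Q * np.sign(np.linalg.det(Q))                  # random proper rotation
t_true = rng.standard_normal(3)
src = rng.standard_normal((200, 3))                     # stand-in "point cloud"
dst = src @ R_true.T + t_true
R_est, t_est = weighted_rigid_transform(src, dst, rng.random(200) + 0.5)
print(np.allclose(R_est, R_true) and np.allclose(t_est, t_true))  # → True
```

The determinant guard is what keeps the solution a rotation rather than a reflection when the correspondences are noisy or degenerate.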

  • Open Access
  • Supplementary Content
  • 10.1049/htl2.70025
Augmented Reality in Outpatient Care: A Narrative Review
  • Nov 22, 2025
  • Healthcare Technology Letters
  • Archan Khandekar + 9 more

Abstract: Introduction: Augmented reality (AR) is seeing increasing application in healthcare, but its reach in outpatient care remains undefined. Patients in outpatient settings often have a poor understanding of their medical care. AR may help close this gap between patients and physicians through immersive, interactive models and supporting tools. This narrative review aims to evaluate the status of AR in outpatient care, categorise its applications, and identify limitations and future research needs. Methods: Four databases (PubMed, Embase, Web of Science and Cochrane Library) were searched for peer-reviewed studies published from January 2015 to February 2025. Studies were included if they involved AR interventions in outpatient care settings. Studies were analysed and grouped thematically into five clinical domains of AR intervention. Results: After review, 19 studies spanning 987 participants were included. AR applications were categorised into patient education and engagement (n = 3), cognitive and functional assessment (n = 3), device interaction and remote monitoring (n = 3), procedural guidance in outpatient interventions (n = 5), and rehabilitation and functional recovery support (n = 5). Most included studies were pilot studies (n = 6) and had relatively small sample sizes (median = 28). Studies showed that AR interventions consistently improved patient understanding, engagement and procedural support. Nevertheless, studies faced limitations including the need for specialised and bulky hardware (which also affected patient comfort), reliability issues, technical difficulties and platform-specific inconsistencies. Conclusion: AR has shown potential to improve outpatient care across five main areas: patient education, cognitive and functional assessment, medical device interaction, procedural guidance and rehabilitation. Studies consistently support that AR enhances patient comprehension, engagement and procedural accuracy while allowing remote monitoring and personalised therapy. Furthermore, AR interventions demonstrate high usability and clinical relevance. Nevertheless, limitations such as hardware complexity and inconsistent technical performance remain. Future research should prioritise large-scale RCTs and strategies to integrate AR into existing digital workflows.

  • Research Article
  • 10.1049/htl2.70034
Augmented Reality With Dynamic Anatomy Modelling for Knee Arthroscopy
  • Nov 21, 2025
  • Healthcare Technology Letters
  • Deokgi Jeung + 4 more

Abstract: Research on augmented reality (AR) for knee arthroscopy has not adequately focused on knee flexion during surgery. To overcome major AR errors caused by knee movement, this study presents an association model between the finite-element models of the knee surface and bones to enable dynamic anatomy modelling. The association model allows the displacement of the knee surface elements and the reaction force of the bone elements to interact with each other. During knee flexion, the real-time shape of the knee is captured with a colour and depth camera, and the association model deforms accordingly from the extension to the flexion state. The proposed model was evaluated using computed tomography data from the knees of six participants. The results showed that the association model successfully compensates for the movement of the femur and tibia within an error margin of only 3.85 mm around the drilling area. The proposed model could therefore enable effective AR-based surgical navigation during knee surgeries.

  • Open Access
  • Research Article
  • 10.1049/htl2.70012
An Analysis of Monitoring Solutions for CAR T Cell Production.
  • Jan 1, 2025
  • Healthcare Technology Letters
  • Arber Shoshi + 10 more

Chimeric antigen receptor T cell (CAR T) therapy has shown remarkable results in treating certain cancers. It involves genetically modifying a patient's T cells to recognize and attack cancer cells. Despite its potential, CAR T cell therapy is complex and costly, and requires the integration of multiple technologies and specialized equipment. Further research is needed to achieve the maximum potential of CAR T cell therapies and to develop effective and efficient methods for their production. This paper presents an overview of current measurement methods used in the key steps of CAR T cell production. The study aims to assess the state of the art in monitoring solutions and identify their potential for online monitoring. The results contribute to the understanding of measurement methods in CAR T cell manufacturing and identify areas where online monitoring can be improved. This research thus facilitates progress toward the development of effective monitoring of CAR T cell therapies.

  • Open Access
  • Research Article
  • Cited by 1
  • 10.1049/htl2.70003
Image synthesis with class-aware semantic diffusion models for surgical scene segmentation.
  • Jan 1, 2025
  • Healthcare Technology Letters
  • Yihang Zhou + 3 more

Surgical scene segmentation is essential for enhancing surgical precision, yet it is frequently compromised by the scarcity and imbalance of available data. To address these challenges, semantic image synthesis methods based on generative adversarial networks and diffusion models have been developed. However, these models often yield non-diverse images and fail to capture small, critical tissue classes, limiting their effectiveness. In response, a class-aware semantic diffusion model (CASDM) is proposed: a novel approach that uses segmentation maps as conditions for image synthesis to tackle data scarcity and imbalance. Novel class-aware mean squared error and class-aware self-perceptual loss functions are defined to prioritize critical, less-visible classes, thereby enhancing image quality and relevance. Furthermore, to the authors' knowledge, this is the first work to generate multi-class segmentation maps from text prompts that specify their contents. These maps are then used by CASDM to generate surgical scene images, enhancing datasets for training and validating segmentation models. The evaluation assesses both image quality and downstream segmentation performance, and demonstrates the strong effectiveness and generalisability of CASDM in producing realistic image-map pairs, significantly advancing surgical scene segmentation across diverse and challenging datasets.
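The "class-aware" weighting idea can be made concrete with a small sketch: weight each pixel's squared error by the inverse frequency of its class in the segmentation map, so rare but critical tissue classes are not drowned out by large background regions. This is one plausible reading of a class-aware MSE, not necessarily CASDM's exact definition; all names and the toy maps below are illustrative.

```python
import numpy as np

def class_aware_mse(pred, target, seg):
    """MSE with per-pixel weights inversely proportional to the frequency of
    each class in the segmentation map `seg`, rescaled so the class weights
    average to 1. One plausible form of a class-aware loss, not necessarily
    the exact CASDM definition."""
    classes, counts = np.unique(seg, return_counts=True)
    inv_freq = seg.size / counts                     # rare classes -> big weight
    inv_freq *= classes.size / inv_freq.sum()        # mean class weight = 1
    w = inv_freq[np.searchsorted(classes, seg)]      # per-pixel weight map
    return float(np.mean(w * (pred - target) ** 2))

# Toy example: 90% background (class 0), 10% critical tissue (class 1),
# with all prediction error concentrated on the rare class.
seg = np.zeros((10, 10), dtype=int)
seg[0, :] = 1                                        # one rare-class row
target = np.zeros((10, 10))
pred = target.copy()
pred[0, :] = 1.0                                     # error only on class 1
plain = float(np.mean((pred - target) ** 2))
aware = class_aware_mse(pred, target, seg)
print(f"plain MSE: {plain:.3f}  class-aware MSE: {aware:.3f}")
```

With the error sitting entirely on the rare class, the class-aware loss comes out larger than the plain MSE, which is exactly the pressure that pushes a generator to render small tissue structures faithfully.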