Image synthesis with class-aware semantic diffusion models for surgical scene segmentation.

Surgical scene segmentation is essential for enhancing surgical precision, yet it is frequently compromised by the scarcity and imbalance of available data. To address these challenges, semantic image synthesis methods based on generative adversarial networks and diffusion models have been developed. However, these models often yield non-diverse images and fail to capture small, critical tissue classes, limiting their effectiveness. In response, a class-aware semantic diffusion model (CASDM) is proposed: a novel approach that uses segmentation maps as conditions for image synthesis to tackle data scarcity and imbalance. Novel class-aware mean squared error and class-aware self-perceptual loss functions are defined to prioritize critical, less visible classes, thereby enhancing image quality and relevance. Furthermore, to the authors' knowledge, this is the first work to generate multi-class segmentation maps from text prompts that specify their contents. These maps are then used by CASDM to generate surgical scene images, enriching datasets for training and validating segmentation models. The evaluation, which assesses both image quality and downstream segmentation performance, demonstrates the strong effectiveness and generalisability of CASDM in producing realistic image-map pairs, significantly advancing surgical scene segmentation across diverse and challenging datasets.
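The class-aware weighting idea behind a loss like CASDM's can be illustrated with a minimal sketch. This is not the authors' implementation: the inverse-frequency weighting scheme and the `class_aware_mse` function are assumptions chosen to show how rare tissue classes can be made to dominate an MSE-style loss.

```python
import numpy as np

def class_aware_mse(pred, target, seg_map, n_classes):
    """MSE where each pixel is scaled by an inverse-frequency class weight,
    so small, rare tissue classes contribute more to the loss."""
    counts = np.bincount(seg_map.ravel(), minlength=n_classes).astype(float)
    freqs = counts / counts.sum()
    # inverse-frequency weights, normalised to mean 1 over classes present
    weights = np.where(counts > 0, 1.0 / np.maximum(freqs, 1e-8), 0.0)
    weights = weights / weights[counts > 0].mean()
    w = weights[seg_map]                     # per-pixel weight via the map
    return float(np.mean(w * (pred - target) ** 2))
```

With a 3:1 class imbalance, an error on the rare class is weighted three times more heavily than the same error on the common class, which is the prioritisation effect the abstract describes.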

Open Access
Identifying factors shaping the behavioural intention of Nepalese youths to adopt digital health tools.

The digitalization of healthcare has gained global importance, especially post-COVID-19, yet remains a challenge in developing countries due to the slow adoption of digital health tools. This study aims to identify the major predictors of Nepalese youths' behavioural intention to adopt digital health tools, using a framework based on the extended unified theory of acceptance and use of technology (UTAUT-2). Cross-sectional data from 280 youths (aged 16-40) in the Kathmandu Valley were collected and analyzed through PLS-SEM. Most respondents used smartwatches, followed by blood pressure monitors and pulse oximeters. The findings revealed hedonic motivation as the strongest predictor of behavioural intention to use digital health tools, followed by facilitating conditions, social influence, habit, and performance expectancy. Behavioural intention significantly influenced actual usage behaviour. Additionally, behavioural intention mediated the relationship between these five constructs and usage behaviour, but not for effort expectancy or price value. The study emphasizes the role of major predictors such as facilitating conditions in shaping youths' intention to adopt digital health tools, providing insights for governments, hospitals, and developers seeking to understand consumer perceptions and motivations.
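The mediation finding can be illustrated numerically. The study used PLS-SEM; the sketch below instead uses ordinary least squares on synthetic data, a Baron-Kenny-style approximation and not the authors' analysis, to show how an indirect effect through behavioural intention is estimated as the product of the two path coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 280  # same sample size as the study; the data here is synthetic
x = rng.normal(size=n)                        # predictor, e.g. social influence
m = 0.6 * x + rng.normal(scale=0.5, size=n)   # mediator: behavioural intention
y = 0.7 * m + 0.1 * x + rng.normal(scale=0.5, size=n)  # outcome: usage behaviour

def ols_slopes(X, y):
    """Least-squares slopes, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

a = ols_slopes(x.reshape(-1, 1), m)[0]                # path x -> m
b, c_direct = ols_slopes(np.column_stack([m, x]), y)  # paths m -> y and x -> y
indirect = a * b  # effect of x on y transmitted through the mediator
```

When the indirect effect `a * b` dominates the direct path `c_direct`, the mediator carries most of the predictor's influence, which is the pattern the study reports for five of its constructs.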

Seamless augmented reality integration in arthroscopy: a pipeline for articular reconstruction and guidance.

Arthroscopy is a minimally invasive surgical procedure used to diagnose and treat joint problems. The clinical workflow typically involves inserting an arthroscope into the joint through a small incision, during which surgeons navigate and operate largely by visual assessment through the arthroscope. However, the arthroscope's restricted field of view and lack of depth perception make it challenging to navigate complex articular structures and achieve surgical precision. Aiming to enhance intraoperative awareness, a robust pipeline incorporating simultaneous localization and mapping, depth estimation, and 3D Gaussian splatting (3D GS) is presented to realistically reconstruct intra-articular structures solely from monocular arthroscope video. Extending the 3D reconstruction to augmented reality (AR) applications, the solution offers AR assistance for articular notch measurement and annotation anchoring in a human-in-the-loop manner. Compared to traditional structure-from-motion and neural radiance field-based methods, the pipeline achieves dense 3D reconstruction and competitive rendering fidelity with explicit 3D representation in 7 min on average. When evaluated on four phantom datasets, the method achieves root-mean-square reconstruction error, peak signal-to-noise ratio, and structural similarity index measure on average. Because the pipeline enables AR reconstruction and guidance directly from monocular arthroscopy without any additional data or hardware, the solution may hold potential for enhancing intraoperative awareness and facilitating surgical precision in arthroscopy. The AR measurement tool achieves accuracy within and the AR annotation tool achieves an mIoU of 0.721.
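The mIoU figure quoted for the AR annotation tool is the standard mean intersection-over-union metric. A minimal NumPy sketch follows; the function name and the convention of skipping classes absent from both masks are our assumptions, not the paper's code.

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks: skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```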

Open Access
Writing the Signs: An Explainable Machine Learning Approach for Alzheimer's Disease Classification from Handwriting.

Alzheimer's disease is a global health challenge, emphasizing the need for early detection to enable timely intervention and improve outcomes. This study analyzes handwriting data from individuals with and without Alzheimer's to identify predictive features across copying, graphic, and memory-based tasks. Machine learning models, including Random Forest, Bootstrap Aggregating (Bagging), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Adaptive Boosting (AdaBoost), and Gradient Boosting, were applied to classify patients, with SHapley Additive exPlanations (SHAP) enhancing model interpretability. Time-related features were crucial in copying and graphic tasks, reflecting cognitive processing speed, while pressure-related features were significant in memory tasks, indicating recall confidence. Simpler graphic tasks showed strong discriminatory power, aiding early detection. Performance metrics demonstrated model effectiveness: for memory tasks, Random Forest achieved the highest accuracy ( ), while Bagged SVC achieved the lowest ( ). Copying tasks recorded a peak accuracy of with Gradient Boosting and a low of for Bagged SVC. Graphic tasks reached with Gradient Boosting and 0.643±0.071 with AdaBoost. Across all tasks combined, Random Forest excelled ( ), while Gradient Boosting performed worst ( ). These results highlight handwriting analysis's potential in Alzheimer's detection.
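The kind of feature attribution described here can be mimicked without the SHAP library. The sketch below uses permutation importance, a related but different attribution technique, on synthetic time and pressure features with a fixed illustrative decision rule; all names and data are assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# synthetic "handwriting" features (illustrative only, not the study's data)
time_feat = rng.normal(size=n)   # task completion time
pressure = rng.normal(size=n)    # pen pressure
# labels driven mostly by time, mimicking its dominance in copying tasks
labels = (time_feat + 0.2 * pressure + rng.normal(scale=0.3, size=n)) > 0

def accuracy(t, p):
    """Accuracy of a fixed, already-'trained' linear rule."""
    return float(np.mean(((t + 0.2 * p) > 0) == labels))

base = accuracy(time_feat, pressure)
# permutation importance: accuracy drop when one feature is shuffled
drop_time = base - accuracy(rng.permutation(time_feat), pressure)
drop_pressure = base - accuracy(time_feat, rng.permutation(pressure))
```

Shuffling the dominant feature collapses accuracy to near chance, while shuffling the minor one barely hurts, which is the qualitative pattern SHAP values make precise per prediction.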

Open Access
Deep regression 2D-3D ultrasound registration for liver motion correction in focal tumour thermal ablation.

Liver tumour ablation procedures require accurate placement of the needle applicator at the tumour centroid. The lower cost and real-time nature of ultrasound (US) give it advantages over computed tomography for applicator guidance; however, in some patients, liver tumours may be occult on US, and tumour mimics can make lesion identification challenging. Image registration techniques can aid in interpreting anatomical details and identifying tumours, but their clinical application has been hindered by the tradeoff between alignment accuracy and runtime performance, particularly when compensating for liver motion due to patient breathing or movement. Therefore, we propose a 2D-3D US registration approach to enable intra-procedural alignment that mitigates errors caused by liver motion. Specifically, our approach can correlate imbalanced 2D and 3D US image features and uses a continuous 6D rotation representation to enhance the model's training stability. The dataset was divided into 2388, 196, and 193 image pairs for training, validation, and testing, respectively. Our approach achieved a mean Euclidean distance error of and a mean geodesic angular error of , with a runtime of per 2D-3D US image pair. These results demonstrate that our approach can achieve accurate alignment with clinically acceptable runtime, indicating potential for clinical translation.
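The continuous 6D rotation representation referenced here is commonly decoded into a rotation matrix by Gram-Schmidt orthogonalization (Zhou et al., "On the Continuity of Rotation Representations in Neural Networks", CVPR 2019). A minimal sketch of that decoding, not the authors' code:

```python
import numpy as np

def rotation_from_6d(r6):
    """Map a continuous 6D vector to a 3x3 rotation matrix by
    Gram-Schmidt orthogonalization of its two 3D halves."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - np.dot(b1, a2) * b1   # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)           # completes a right-handed frame
    return np.stack([b1, b2, b3])
```

Unlike Euler angles or quaternions, this map has no discontinuities over the rotation group, which is the training-stability property such registration networks exploit.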

Open Access