  • Open Access
  • Research Article
  • Citations: 7
  • DOI: 10.3390/digital5030026
Artificial Intelligence in Construction Project Management: A Structured Literature Review of Its Evolution in Application and Future Trends
  • Jul 9, 2025
  • Digital
  • Yetunde Adebayo + 3 more

The integration of Artificial Intelligence (AI) in construction project management is revolutionising the industry, offering innovative solutions to enhance efficiency, reduce costs, and improve decision-making. This structured literature review explored the current applications, benefits, challenges, and future trends of AI in construction project management. The study synthesised findings from 135 peer-reviewed articles published between 1985 and 2024, representing Industry 3.0 (3IR), Industry 4.0 (4IR), and Industry 4.0 post-COVID-19 (4IR PC). Analysis showed that the Planning and the Monitoring and Control phases of a project have the greatest application of AI, while decision-making, prediction, optimisation, and performance improvement are the most common purposes of AI use in the construction industry. The drivers of AI adoption within the construction industry include technology availability, improved project outcomes and performance, competitive advantage, and a focus on sustainability. Despite these advancements, the review revealed several barriers to AI adoption, including data integration issues, the high cost of AI implementation, resistance to change among stakeholders, and ethical concerns surrounding data privacy, amongst others. The review also identified ongoing and future applications of AI in the construction industry, such as sustainability and energy efficiency, digital twins, advanced robotics and autonomous construction, and optimisation. By providing a comprehensive analysis of the evolution of practices and the future direction of AI application, this study serves as a resource for researchers, practitioners, and policymakers seeking to understand the evolving landscape of AI in construction project management.

  • Open Access
  • Research Article
  • DOI: 10.3390/digital5030025
Correction: Williady et al. Investigating Efficiency and Innovation: An Exploratory and Predictive Analysis of Smart Airport Systems. Digital 2024, 4, 599–612
  • Jul 1, 2025
  • Digital
  • Angellie Williady + 2 more

The authors would like to make the following corrections to the published paper [...]

  • Open Access
  • Research Article
  • Citations: 3
  • DOI: 10.3390/digital5030024
ChatGPT and Digital Transformation: A Narrative Review of Its Role in Health, Education, and the Economy
  • Jun 28, 2025
  • Digital
  • Dag Øivind Madsen + 1 more

ChatGPT, a prominent large language model developed by OpenAI, has rapidly become embedded in digital infrastructures across various sectors. This narrative review examines its evolving role and societal implications in three key domains: healthcare, education, and the economy. Drawing on recent literature and examples, the review explores ChatGPT’s applications, limitations, and ethical challenges in each context. In healthcare, the model is used to support patient communication and mental health services, while raising concerns about misinformation and privacy. In education, it offers new forms of personalized learning and feedback, but also complicates assessment and equity. In the economy, ChatGPT augments business operations and knowledge work, yet introduces risks related to job displacement, data governance, and automation bias. The review synthesizes these developments to highlight how ChatGPT is driving digital transformation while generating new demands for oversight, regulation, and critical inquiry. It concludes by outlining priorities for future research and policy, emphasizing the need for interdisciplinary collaboration, transparency, and inclusive access as generative AI continues to evolve.

  • Open Access
  • Research Article
  • Citations: 1
  • DOI: 10.3390/digital5020023
Lightweight Interpretable Deep Learning Model for Nutrient Analysis in Mobile Health Applications
  • Jun 17, 2025
  • Digital
  • Zvinodashe Revesai + 1 more

Nutrient analysis through mobile health applications can improve dietary choices, particularly among vulnerable populations. Current mobile nutrient analysis applications face critical limitations: sophisticated deep learning models require substantial computational resources unsuitable for budget devices, while lightweight solutions sacrifice accuracy and lack the interpretability necessary for user trust. We develop a lightweight interpretable deep learning architecture combining depthwise separable convolutions, Shuffle Attention mechanisms, and knowledge distillation with integrated Grad-CAM and LIME explanations for real-time interpretability. Our model achieves 97.1% food recognition accuracy (98.0% with cross-validation) and 7.2% mean absolute error in nutrient estimation while maintaining an 11 MB footprint and 150 ms inference time. Knowledge distillation reduces the model size by 62% and energy consumption by 36% while improving accuracy by 2.2 percentage points over non-distilled training. Targeted optimisation for food security categories achieves 94.1% accuracy for staple foods, 93.2% for affordable proteins, and 92.8% for accessible produce. Interpretability methods demonstrate 0.91 feature consistency scores with 38–45 ms explanation generation. These results demonstrate the first mobile nutrient analysis system combining state-of-the-art accuracy with computational efficiency suitable for resource-constrained deployment, addressing accessibility barriers for vulnerable populations.
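The knowledge distillation the abstract credits with a 62% size reduction follows, in outline, the standard soft-target recipe. The sketch below is a generic illustration of that loss, not the authors' training code; the temperature and weighting values are chosen purely for demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    """Hinton-style distillation objective: a weighted sum of
    (a) KL divergence between softened teacher and student outputs and
    (b) standard cross-entropy against the hard ground-truth label."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student); the T^2 factor keeps gradient scales comparable.
    kl = sum(p * math.log(p / q)
             for p, q in zip(p_teacher, p_student) if p > 0)
    hard = -math.log(softmax(student_logits)[true_label])
    return alpha * (temperature ** 2) * kl + (1 - alpha) * hard
```

A student whose logits already match the teacher incurs only the (down-weighted) hard-label term, which is what lets a small model inherit the larger model's output structure during training.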

  • Open Access
  • Research Article
  • DOI: 10.3390/digital5020022
Integration of YOLOv9 Segmentation and Monocular Depth Estimation in Thermal Imaging for Prediction of Estrus in Sows Based on Pixel Intensity Analysis
  • Jun 13, 2025
  • Digital
  • Iyad Almadani + 2 more

Many researchers focus on improving reproductive health in sows and ensuring successful breeding by accurately identifying the optimal time of ovulation through estrus detection. One promising non-contact technique involves using computer vision to analyze temperature variations in thermal images of the sow’s vulva. However, variations in camera distance during dataset collection can significantly affect the accuracy of this method, as different distances alter the resolution of the region of interest, causing pixel intensity values to represent varying areas and temperatures. This inconsistency hinders the detection of the subtle temperature differences required to distinguish between estrus and non-estrus states. Moreover, failure to maintain a consistent camera distance, along with external factors such as atmospheric conditions and improper calibration, can distort temperature readings, further compromising data accuracy and reliability. Furthermore, without addressing distance variations, the model’s generalizability diminishes, increasing the likelihood of false positives and negatives and ultimately reducing the effectiveness of estrus detection. In our previously proposed methodology for estrus detection in sows, we utilized YOLOv8 for segmentation and keypoint detection, while monocular depth estimation was used for camera calibration. This calibration helps establish a functional relationship between the measurements in the image (such as distances between labia, the clitoris-to-perineum distance, and vulva perimeter) and the depth distance to the camera, enabling accurate adjustments and calibration for our analysis. Estrus classification is performed by comparing new data points with reference datasets using a three-nearest-neighbor voting system. In this paper, we aim to enhance our previous method by incorporating the mean pixel intensity of the region of interest as an additional factor. 
We propose a detailed four-step methodology coupled with two stages of evaluation. First, we carefully annotate masks around the vulva to calculate its perimeter precisely. Leveraging the advantages of deep learning, we train a model on these annotated images, enabling segmentation using the cutting-edge YOLOv9 algorithm. This segmentation enables the detection of the sow's vulva, allowing for analysis of its shape and facilitating the calculation of the mean pixel intensity in the region. Crucially, we use monocular depth estimation from the previous method, establishing a functional link between pixel intensity and the distance to the camera, ensuring accuracy in our analysis. We then introduce a classification approach that differentiates between estrus and non-estrus regions based on the mean pixel intensity of the vulva. This classification method involves calculating Euclidean distances between new data points and reference points from two datasets: one for "estrus" and the other for "non-estrus". The classification process identifies the five closest neighbors from the datasets and applies a majority voting system to determine the label. A new point is classified as "estrus" if the majority of its nearest neighbors are labeled as estrus; otherwise, it is classified as "non-estrus". This automated approach offers a robust solution for accurate estrus detection. To validate our method, we propose two evaluation stages: first, a quantitative analysis comparing the performance of our new YOLOv9 segmentation model with the older U-Net and YOLOv8 models; second, an assessment of the classification process, defining a confusion matrix and comparing the results of our previous method, which used the three nearest points, with those of our new model, which uses the five nearest points. This comparison allows us to evaluate the improvements in accuracy and performance achieved with the updated model.
The automation of this vital process holds the potential to revolutionize reproductive health management in agriculture, boosting breeding success rates. Through thorough evaluation and experimentation, our research highlights the transformative power of computer vision, advancing practices in the field.
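The five-nearest-neighbor majority-voting scheme the abstract describes can be sketched in a few lines. The reference intensity/perimeter values below are invented for illustration; they are not taken from the paper's datasets.

```python
import math
from collections import Counter

def classify_estrus(new_point, reference_data, k=5):
    """Classify a feature vector as 'estrus' or 'non-estrus' by majority
    vote among its k nearest labeled reference points (Euclidean distance).

    reference_data: list of (feature_vector, label) pairs.
    """
    # Rank all reference points by Euclidean distance to the new point.
    ranked = sorted(reference_data,
                    key=lambda pair: math.dist(new_point, pair[0]))
    # Take the k closest neighbors and vote on their labels.
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical reference points: (mean pixel intensity, vulva perimeter).
references = [
    ((182.0, 41.0), "estrus"), ((178.5, 39.5), "estrus"),
    ((185.2, 42.3), "estrus"), ((150.1, 35.0), "non-estrus"),
    ((148.7, 34.2), "non-estrus"), ((152.3, 36.1), "non-estrus"),
]

print(classify_estrus((180.0, 40.0), references))  # -> "estrus"
```

With k=5 an odd vote count is guaranteed for a two-class problem, so the majority vote never ties, which is one practical reason to prefer five neighbors over an even number.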

  • Open Access
  • Research Article
  • Citations: 1
  • DOI: 10.3390/digital5020021
Beauty Tech—Customer Experience and Loyalty of Augmented Reality- and Artificial Intelligence-Driven Cosmetics
  • Jun 13, 2025
  • Digital
  • Jens K Perret + 1 more

Cosmetics companies are increasingly integrating augmented reality and artificial intelligence technologies into products and services referred to as beauty tech; consumer perceptions of these solutions, however, remain understudied. Data generated via an online survey are analysed within a stimulus–organism–response framework deduced from the beauty tech literature. The study thereby identifies how the interactivity, informativeness, personalization, and service quality of digital and physical beauty tech solutions for home use affect utilitarian and hedonistic values and perceived risk factors among consumers. The effect of value perception on purchase intention and loyalty is considered via customer satisfaction. Results point to strong effects of service and application characteristics on the utilitarian and hedonistic dimensions of customer experience, which in turn strongly influence customer satisfaction. Perceived risk factors play only a marginal role. Only for the tested physical product does higher service quality add to the customer experience. Customer satisfaction in turn results in positive brand perception across different stages of the customer journey and leads to higher purchase intention, positive brand advocacy, and higher re-purchase intention. Consequently, well-designed solutions can generate higher customer satisfaction and loyalty at multiple stages along the customer journey.

  • Open Access
  • Research Article
  • Citations: 1
  • DOI: 10.3390/digital5020020
Assessing Digital Technology Development in Latin American Countries: Challenges, Drivers, and Future Directions
  • Jun 10, 2025
  • Digital
  • Diana Pamela Chavarry Galvez + 1 more

This research analyzes the digital readiness of Latin American countries by assessing the following key factors: digital infrastructure, human capital, internet use, adoption of digital technology by businesses, and digital government services. These factors are critical to the development of digital technology in the region. The analysis identifies countries that are leaders in digital development (Brazil, Mexico, Chile, Colombia, and Argentina), countries with an average level of digital technology development (Peru, Uruguay, Costa Rica, Paraguay, Panama, and the Dominican Republic), and those with slower progress (Bolivia, Ecuador, Venezuela, Guatemala, El Salvador, Honduras, Cuba, and Nicaragua). Based on this assessment, the study proposes and evaluates positive, negative, and neutral scenarios for the future of digital technology in Latin America over the next five years. The study concludes that a neutral scenario is the most likely, suggesting that, while advanced countries will maintain stable growth, lagging countries will experience accelerated, albeit still moderate, digitalization. This has key implications for regional competitiveness and digital inclusion. The study used methods of analysis, synthesis, classification, grouping, statistics, indexing, and scoring. This study uses the most recent data available (2022–2024) to provide an updated and comprehensive assessment of digital transformation in Latin America, reflecting post-pandemic dynamics and emerging digital trends such as AI and fintech growth.

  • Open Access
  • Research Article
  • Citations: 6
  • DOI: 10.3390/digital5020019
Real-Time Waste Detection and Classification Using YOLOv12-Based Deep Learning Model
  • Jun 9, 2025
  • Digital
  • Mosharof Hossain Dipo + 6 more

Increased waste volumes and the limitations of traditional separation methods have made waste management a pressing topic in recent years. To optimize the recycling process and minimize environmental impact, waste materials must be reliably detected and classified. The proposed system is an automated waste-detection framework that integrates machine vision and artificial intelligence (AI), using advanced convolutional neural networks (CNNs) for data collection, real-time waste detection, and classification. Images of waste were captured in many different settings and analyzed with a YOLOv12-based model. The system detects and categorizes waste types with 73% precision and a mean average precision (mAP) of 78% over 100 epochs. Results indicate that the YOLOv12 model surpasses current detection algorithms, providing an efficient and scalable solution to waste management challenges.

  • Open Access
  • Research Article
  • Citations: 1
  • DOI: 10.3390/digital5020018
A Methodology for Building a Medical Ontology with a Limited Domain Experts’ Involvement
  • May 28, 2025
  • Digital
  • Sabrina Azzi

Ontology development is multidisciplinary work involving domain experts and knowledge engineers. Bringing together such a team to develop a quality ontology is not easy. Therefore, ontologies are often created with limited expertise either in the medical domain or in ontology engineering. Unfortunately, existing methodologies do not provide much guidance on how the different steps of ontology development should be performed, particularly when the involvement of domain experts is reduced. This challenge becomes more difficult when there is a multitude of medical knowledge sources and ontologies covering parts of the domain, each often representing the same concept differently, for example, as a symptom, a disease, or a clinical sign. This research presents a methodology for creating a quality medical ontology with limited involvement of domain experts, who are consulted only in the domain definition and evaluation phases. We combine building an ontology from codified knowledge with ontology reuse to enhance reusability and interoperability. The methodology is inspired by METHONTOLOGY, to which we make several improvements, especially in the ontology reuse phase. We provide proof of concept of the proposed methodology with a case study involving the development of the pneumonia diagnosis ontology (PNADO).

  • Open Access
  • Research Article
  • Citations: 1
  • DOI: 10.3390/digital5020017
Personalized Course Recommendation System: A Multi-Model Machine Learning Framework for Academic Success
  • May 22, 2025
  • Digital
  • Md Sajid Islam + 1 more

The increasing complexity of academic programs and student needs necessitates personalized, data-driven academic advising. Traditional heuristic-based methods often fail to optimize course selection, leading to inefficient academic planning and delayed graduations. This study introduces a hierarchical multi-model machine learning framework for personalized course recommendations, integrating five predictive models: Success Probability Model (SPM), Course Fit Score Model (CFSM), Prerequisite Fulfillment Model (PFM), Graduation Priority Model (GPM), and Recommended Load Model (RLM). These models operate independently in a local model framework, generating specialized predictions that are synthesized by a global model framework through a meta-function. The meta-function aggregates predictions to compute a final score for each course and ensures recommendations align with student success probabilities, program requirements, and workload constraints. It enforces key constraints, such as prerequisite satisfaction, workload optimization, and program-specific requirements, refining recommendations to be both academically viable and institutionally compliant. The framework demonstrated strong predictive performance, with root mean squared error values of 0.00956, 0.011713, and 0.005406 for SPM, CFSM, and RLM, respectively. Classification models for PFM and GPM also yielded high accuracy, exceeding 99%. Designed for modularity and adaptability, the framework allows for the integration of additional predictive models and fine-tuning of recommendation priorities to suit institutional needs. This scalable solution enhances academic advising efficiency by transforming granular model predictions into personalized, actionable course recommendations, supporting students in making informed academic decisions.
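The global meta-function the abstract describes, which aggregates the five local model scores under hard constraints such as prerequisite satisfaction and a workload cap, might look roughly like the sketch below. The weights, score keys, and course data are hypothetical; the paper's exact formulation is not reproduced here.

```python
def recommend_courses(candidates, completed, weights=None, max_load=4):
    """Rank candidate courses by a weighted meta-score over the five local
    model outputs, excluding any course whose prerequisites are unmet.

    candidates: list of dicts with keys 'course', 'prereqs', and per-model
    scores 'spm', 'cfsm', 'pfm', 'gpm', 'rlm' (each assumed in [0, 1]).
    """
    # Hypothetical weights chosen for illustration only.
    weights = weights or {"spm": 0.3, "cfsm": 0.2, "pfm": 0.15,
                          "gpm": 0.25, "rlm": 0.1}
    # Hard constraint: every prerequisite must already be completed.
    eligible = [c for c in candidates if set(c["prereqs"]) <= completed]
    # Meta-score: weighted sum of the five local model predictions.
    ranked = sorted(eligible,
                    key=lambda c: sum(weights[m] * c[m] for m in weights),
                    reverse=True)
    return [c["course"] for c in ranked[:max_load]]  # workload cap

# Hypothetical student record and candidate pool.
candidates = [
    {"course": "CS301", "prereqs": ["CS201"],
     "spm": 0.9, "cfsm": 0.8, "pfm": 1.0, "gpm": 0.7, "rlm": 0.6},
    {"course": "CS401", "prereqs": ["CS301"],
     "spm": 0.95, "cfsm": 0.9, "pfm": 0.0, "gpm": 0.9, "rlm": 0.8},
    {"course": "MATH210", "prereqs": [],
     "spm": 0.6, "cfsm": 0.6, "pfm": 0.6, "gpm": 0.6, "rlm": 0.6},
]
result = recommend_courses(candidates, completed={"CS101", "CS201"})
print(result)  # CS401 is filtered out despite its high scores
```

Filtering before ranking reflects the abstract's point that the meta-function enforces institutional constraints: a high-scoring course is never recommended if its prerequisites are unmet.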