  • Open Access
  • Research Article
  • 10.3390/informatics12040134
MCD-Temporal: Constructing a New Time-Entropy Enhanced Dynamic Weighted Heterogeneous Ensemble for Cognitive Level Classification
  • Dec 2, 2025
  • Informatics
  • Yuhan Wu + 3 more

Accurate classification of cognitive levels in instructional dialogues is essential for personalized education and intelligent teaching systems. However, most existing methods rely on static textual features and shallow semantic analysis; they often overlook dynamic temporal interactions and struggle with class imbalance. To address these limitations, this study proposes a novel framework for cognitive-level classification that integrates time-entropy-enhanced temporal dynamics with a dynamically weighted, heterogeneous ensemble strategy. Specifically, we reconstruct the original Multi-turn Classroom Dialogue (MCD) dataset by introducing time entropy to quantify teacher–student speaking balance, together with semantic richness features based on Term Frequency-Inverse Document Frequency (TF-IDF), resulting in an enhanced MCD-Temporal dataset. We then design a Dynamic Weighted Heterogeneous Ensemble (DWHE), which adjusts classifier weights based on the class distribution. Our framework achieves a state-of-the-art macro-F1 score of 0.6236. This study validates the effectiveness of incorporating temporal dynamics and adaptive ensemble learning for robust cognitive-level assessment, offering a more powerful tool for educational AI applications.
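The abstract does not give the exact formula for time entropy; a plausible minimal sketch, treating it as the Shannon entropy of per-speaker talk-time shares (balanced teacher–student dialogue yields high entropy, a monologue yields entropy near zero):

```python
import math

def time_entropy(speaking_times):
    """Shannon entropy (bits) of per-speaker talk-time shares.

    Balanced teacher-student turns give high entropy; a
    one-sided monologue gives entropy near 0.
    """
    total = sum(speaking_times)
    if total == 0:
        return 0.0
    shares = [t / total for t in speaking_times if t > 0]
    return -sum(p * math.log2(p) for p in shares)

# Perfectly balanced two-speaker dialogue: entropy = 1 bit.
balanced = time_entropy([30.0, 30.0])  # teacher, student seconds
skewed = time_entropy([55.0, 5.0])     # teacher-dominated segment
```

For two speakers the value lies in [0, 1], so it can be used directly as a bounded feature alongside the TF-IDF-based richness scores.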

  • Open Access
  • Research Article
  • 10.3390/informatics12040133
Fuzzy Ontology Embeddings and Visual Query Building for Ontology Exploration
  • Dec 1, 2025
  • Informatics
  • Vladimir Zhurov + 3 more

Ontologies play a central role in structuring knowledge across domains, supporting tasks such as reasoning, data integration, and semantic search. However, their large size and complexity—particularly in fields such as biomedicine, computational biology, law, and engineering—make them difficult for non-experts to navigate. Formal query languages such as SPARQL offer expressive access but require users to understand the ontology’s structure and syntax. In contrast, visual exploration tools and basic keyword-based search interfaces are easier to use but often lack flexibility and expressiveness. We introduce FuzzyVis, a proof-of-concept system that enables intuitive and expressive exploration of complex ontologies. FuzzyVis integrates two key components: a fuzzy logic-based querying model built on fuzzy ontology embeddings, and an interactive visual interface for building and interpreting queries. Users can construct new composite concepts by selecting and combining existing ontology concepts using logical operators such as conjunction, disjunction, and negation. These composite concepts are matched against the ontology using fuzzy membership-based embeddings, which capture degrees of membership and support approximate, concept-level similarity search. The visual interface supports browsing, query composition, and partial search without requiring formal syntax. By combining fuzzy semantics with embedding-based reasoning, FuzzyVis enables flexible interpretation, efficient computation, and exploratory learning. A usage scenario demonstrates how FuzzyVis supports subtle information needs and helps users uncover relevant concepts in large, complex ontologies.
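The exact embedding model behind FuzzyVis is not specified in the abstract; a minimal sketch of the composite-concept idea using the standard Zadeh operators (min for conjunction, max for disjunction, complement for negation) over membership-degree vectors:

```python
def fuzzy_and(a, b):
    # Conjunction: element-wise minimum of membership degrees.
    return [min(x, y) for x, y in zip(a, b)]

def fuzzy_or(a, b):
    # Disjunction: element-wise maximum of membership degrees.
    return [max(x, y) for x, y in zip(a, b)]

def fuzzy_not(a):
    # Negation: standard fuzzy complement 1 - mu.
    return [1.0 - x for x in a]

# Hypothetical membership degrees of two concepts over five entities.
gene = [0.9, 0.2, 0.7, 0.0, 0.5]
drug_target = [0.4, 0.8, 0.6, 0.1, 0.0]

# Composite concept: "gene AND NOT drug target".
query = fuzzy_and(gene, fuzzy_not(drug_target))
```

The resulting vector can then be ranked against stored concept embeddings for the approximate, concept-level similarity search the paper describes.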

  • Open Access
  • Research Article
  • 10.3390/informatics12040131
Hierarchical Fake News Detection Model Based on Multi-Task Learning and Adversarial Training
  • Nov 27, 2025
  • Informatics
  • Yi Sun + 1 more

The harm caused by online fake news has drawn widespread research attention to fake news detection. Most existing methods focus on improving the accuracy and early detection of fake news while ignoring the cross-topic shifts that fake news frequently undergoes in online environments. This paper proposes a hierarchical fake news detection method (HAMFD) based on multi-task learning and adversarial training. An event-level multi-task learning objective introduces subjective and objective information, and a subjectivity classifier captures sentiment shift within events, aiming to improve the in-domain performance and generalization ability of fake news detection. On this basis, textual features and sentiment-shift features are fused to perform event-level fake news detection and enhance detection accuracy. The post-level and event-level losses are weighted and summed for backpropagation. Adversarial perturbations are added to the embedding layer of the post-level module to deceive the detector, enabling the model to better resist adversarial attacks and improving its robustness and topic adaptability. Experiments on three real-world social media datasets show that the proposed method improves performance in both in-domain and cross-topic fake news detection. Specifically, the model attains accuracies of 91.3% on Twitter15, 90.4% on Twitter16, and 95.7% on Weibo, surpassing advanced baseline methods by 1.6%, 1.5%, and 1.1%, respectively.
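The abstract says perturbations are added to the embedding layer but does not name the scheme; a common choice for this setup is an FGM-style perturbation, sketched here as an assumption (the perturbed embedding is trained on alongside the clean one):

```python
import math

def fgm_perturb(embedding, grad, eps=1.0):
    """FGM-style adversarial perturbation: move the embedding a
    fixed distance eps along the normalized loss-gradient direction."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm == 0:
        return list(embedding)
    return [e + eps * g / norm for e, g in zip(embedding, grad)]

emb = [0.5, -1.2, 0.3]
grad = [0.0, 3.0, 4.0]  # gradient of the loss w.r.t. emb (illustrative)
adv = fgm_perturb(emb, grad, eps=0.5)
```

Training on both `emb` and `adv` is what makes the detector harder to fool with small embedding-space attacks, which is the robustness property the paper targets.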

  • Open Access
  • Research Article
  • 10.3390/informatics12040130
Explainable Artificial Intelligence for Workplace Mental Health Prediction
  • Nov 26, 2025
  • Informatics
  • Tsholofelo Mokheleli + 2 more

The increased prevalence of mental health issues in the workplace affects employees’ well-being and organisational success, necessitating proactive interventions such as employee assistance programmes, stress management workshops, and tailored wellness initiatives. Artificial intelligence (AI) techniques are transforming mental health risk prediction using behavioural, environmental, and workplace data. However, the “black-box” nature of many AI models hinders trust, transparency, and adoption in sensitive domains such as mental health. This study used the Open Sourcing Mental Illness (OSMI) secondary dataset (2016–2023) and applied four ML classifiers to predict workplace mental health outcomes: Random Forest (RF), XGBoost, Support Vector Machine (SVM), and AdaBoost. Explainable AI (XAI) techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), were integrated to provide both global (SHAP) and instance-level (LIME) interpretability. The Synthetic Minority Oversampling Technique (SMOTE) was applied to address class imbalance. The results show that XGBoost and RF achieved the highest cross-validation accuracy (94%), with XGBoost performing best overall (accuracy = 91%, ROC AUC = 90%), followed by RF (accuracy = 91%). SHAP revealed that sought_treatment, past_mh_disorder, and current_mh_disorder had the most significant positive impact on predictions, while LIME provided case-level explanations to support individualised interpretation. These findings show the importance of explainable ML models in informing timely, targeted interventions, such as improving access to mental health resources, promoting stigma-free workplaces, and supporting treatment-seeking behaviour, while ensuring the ethical and transparent integration of AI into workplace mental health management.
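SMOTE addresses class imbalance by interpolating new minority samples between existing ones; a minimal dependency-free sketch of that interpolation idea (the study would have used a full library implementation, and `k` here is illustrative):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Minimal SMOTE-style oversampling: synthesize points by
    interpolating between a random minority sample and one of
    its k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance (excluding x).
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + lam * (b - a) for a, b in zip(x, nb)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_points = smote_like(minority, n_new=4)
```

Because every synthetic point lies on a segment between two real minority samples, it stays inside the minority region rather than simply duplicating existing rows.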

  • Open Access
  • Research Article
  • 10.3390/informatics12040129
ETICD-Net: A Multimodal Fake News Detection Network via Emotion-Topic Injection and Consistency Modeling
  • Nov 25, 2025
  • Informatics
  • Wenqian Shang + 4 more

The widespread dissemination of multimodal disinformation, which combines inflammatory text with manipulated images, poses a severe threat to society. Existing detection methods typically process textual and visual features in isolation or perform simple fusion, failing to capture the sophisticated semantic inconsistencies commonly found in false information. To address this, we propose a novel framework: Emotion-Topic Injection and Consistency Detection Network (ETICD-Net). First, a large language model (LLM) extracts structured sentiment and topic-guided signals from news texts to provide rich semantic clues. Second, unlike previous approaches, this guided signal is injected into the feature extraction processes of both modalities: it enhances text features from BERT and modulates image features from ResNet, thereby generating sentiment-topic-aware feature representations. Additionally, this paper introduces a hierarchical consistency fusion module that explicitly evaluates semantic coherence among these enhanced features. It employs cross-modal attention mechanisms, enabling text to query image regions relevant to its statements, and calculates explicit dissimilarity metrics to quantify inconsistencies. Extensive experiments on the Weibo and Twitter benchmark datasets demonstrate that ETICD-Net outperforms or matches state-of-the-art methods, achieving accuracy and F1 scores of 90.6% and 91.5%, respectively.
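The "text queries image regions" step is standard cross-modal attention; a minimal sketch under that assumption (one text query vector attending over per-region image features; dimensions and values are illustrative, not from the paper):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_modal_attention(text_q, image_keys, image_vals):
    """Scaled dot-product attention: a text query scores each
    image-region key, and the softmax weights pool region values."""
    d = len(text_q)
    scores = [sum(q * k for q, k in zip(text_q, key)) / math.sqrt(d)
              for key in image_keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, image_vals))
           for i in range(len(image_vals[0]))]
    return out, weights

# One text-token query attends over two image-region features.
out, weights = cross_modal_attention(
    [10.0, 0.0],               # text query vector
    [[1.0, 0.0], [0.0, 1.0]],  # image-region keys
    [[1.0], [0.0]],            # image-region values
)
```

The attended image summary `out` can then be compared against the text representation, and a large distance between the two serves as the kind of explicit dissimilarity signal the paper quantifies.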

  • Open Access
  • Research Article
  • 10.3390/informatics12040128
Learning Dynamics Analysis: Assessing Generalization of Machine Learning Models for Optical Coherence Tomography Multiclass Classification
  • Nov 22, 2025
  • Informatics
  • Michael Sher + 3 more

This study evaluated the generalization and reliability of machine learning models for multiclass classification of retinal pathologies using a diverse set of images representing eight disease categories. Images were aggregated from two public datasets and divided into training, validation, and test sets, with an additional independent dataset used for external validation. Multiple modeling approaches were compared, including classical machine learning algorithms, convolutional neural networks with and without data augmentation, and a deep neural network using pre-trained feature extraction. Analysis of learning dynamics revealed that classical models and unaugmented convolutional neural networks exhibited overfitting and poor generalization, while models with data augmentation and the deep neural network showed healthy, parallel convergence of training and validation performance. Only the deep neural network demonstrated a consistent, monotonic decrease in accuracy, F1-score, and recall from training through external validation, indicating robust generalization. These results underscore the necessity of evaluating learning dynamics (not just summary metrics) to ensure model reliability and patient safety. Typically, model performance is expected to decrease gradually as data becomes less familiar. Therefore, models that do not exhibit these healthy learning dynamics, or that show unexpected improvements in performance on subsequent datasets, should not be considered for clinical application, as such patterns may indicate methodological flaws or data leakage rather than true generalization.
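The healthy-generalization criterion the authors describe reduces to a simple monotonicity check across evaluation stages; a sketch (the accuracy values are illustrative, not the paper's results):

```python
def healthy_generalization(metrics):
    """Check the pattern the study describes: a metric should
    decrease monotonically from training to validation to test to
    external validation. An *increase* on less familiar data is a
    red flag (possible data leakage or a methodological flaw)."""
    return all(later <= earlier for earlier, later in zip(metrics, metrics[1:]))

# accuracy on [train, validation, test, external validation]
dnn = [0.95, 0.93, 0.91, 0.88]      # gradual decay: plausible generalization
suspect = [0.90, 0.86, 0.92, 0.89]  # jump on less familiar data: investigate
```

Running the same check over accuracy, F1-score, and recall gives a cheap screening step before summary metrics are trusted for clinical use.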

  • Open Access
  • Research Article
  • 10.3390/informatics12040125
An Adaptive Protocol Selection Framework for Energy-Efficient IoT Communication: Dynamic Optimization Through Context-Aware Decision Making
  • Nov 17, 2025
  • Informatics
  • Dmitrij Żatuchin + 1 more

The rapid growth of Internet of Things (IoT) deployments has created an urgent need for energy-efficient communication strategies that can adapt to dynamic operational conditions. This study presents a novel adaptive protocol selection framework that dynamically optimizes IoT communication energy consumption through context-aware decision making, achieving up to 34% energy reduction compared to static protocol selection. The framework is grounded in a comprehensive empirical evaluation of three widely used IoT communication protocols—MQTT, CoAP, and HTTP—using Intel’s Running Average Power Limit (RAPL) for precise energy measurement across varied network conditions including packet loss (0–20%) and latency variations (1–200 ms). Our key contribution is the design and validation of an adaptive selection mechanism that employs multi-criteria decision making with hysteresis control to prevent oscillation, dynamically switching between protocols based on six runtime metrics: message frequency, payload size, network conditions, packet loss rate, available energy budget, and QoS requirements. Results show MQTT consumes only 40% of HTTP’s energy per byte at high volumes (>10,000 messages), while HTTP remains practical for low-volume traffic (<10 msg/min). A novel finding reveals receiver nodes consistently consume 15–20% more energy than senders, requiring new design considerations for IoT gateways. The framework demonstrates robust performance across simulated real-world conditions, maintaining 92% of optimal performance while requiring 85% less computation than machine learning approaches. These findings offer actionable guidance for IoT architects and developers, positioning this work as a practical solution for energy-aware IoT communication in production environments.
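The hysteresis control can be sketched as a dead band around the switching point: the selector changes protocol only when the message rate crosses the outer thresholds, so small fluctuations in between do not cause oscillation. The thresholds below are illustrative, not the paper's tuned values, and only one of the six runtime metrics is used:

```python
def select_protocol(current, rate, high=100.0, low=20.0):
    """Hysteresis band on message rate (msg/min): switch to MQTT
    only above `high`, fall back to HTTP only below `low`; in the
    band between them, keep the current protocol."""
    if rate >= high:
        return "MQTT"
    if rate <= low:
        return "HTTP"
    return current

proto = "HTTP"
trace = []
for rate in [5, 150, 60, 60, 10, 60]:
    proto = select_protocol(proto, rate)
    trace.append(proto)
```

Note that the two visits to rate 60 yield different protocols depending on history, which is exactly the oscillation-damping behaviour hysteresis provides; the full framework would combine such bands over all six metrics.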

  • Open Access
  • Research Article
  • 10.3390/informatics12040124
Leveraging the Graph-Based LLM to Support the Analysis of Supply Chain Information
  • Nov 13, 2025
  • Informatics
  • Peng Su + 2 more

Modern companies often rely on integrating an extensive network of suppliers to organize and produce industrial artifacts. Within this process, it is critical to maintain sustainability and flexibility by analyzing and managing information from the supply chain. In particular, there is a continuous demand to automatically analyze and infer information from extensive datasets structured in various forms, such as natural language and domain-specific models. The advancement of Large Language Models (LLMs) presents a promising solution to this challenge. By leveraging prompts that contain the necessary information provided by humans, LLMs can generate insightful responses through analysis and reasoning over the provided content. However, the quality of these responses is still affected by the inherent opaqueness of LLMs, stemming from their complex architectures, which weakens their trustworthiness and limits their applicability across different fields. To address this issue, this work presents a framework that leverages a graph-based LLM to support the analysis of supply chain information by combining the LLM with domain knowledge, as follows: (1) constructing a graph-based knowledge base to describe and model the domain knowledge; (2) creating prompts to support the retrieval of the graph-based models and guide the generation of the LLM; (3) generating responses via the LLM to support analysis and reasoning about information across the supply chain. We demonstrate the proposed framework on the tasks of entity classification, link prediction, and reasoning across entities. Compared to the average performance of the best methods in the comparative studies, the proposed framework achieves a significant improvement of 59%, increasing the ROUGE-1 F1 score from 0.42 to 0.67.
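Step (2), turning retrieved graph content into a grounding prompt, can be sketched as simple triple serialization; the prompt wording, entity names, and relation names below are all hypothetical, chosen only to illustrate the pattern:

```python
def triples_to_prompt(triples, question):
    """Serialize retrieved knowledge-graph triples into a grounding
    context block for an LLM prompt."""
    facts = "\n".join(f"- {s} {p} {o}." for s, p, o in triples)
    return (
        "Use only the following supply-chain facts to answer.\n"
        f"Facts:\n{facts}\n"
        f"Question: {question}\n"
    )

# Hypothetical triples retrieved from the graph-based knowledge base.
kb = [
    ("SupplierA", "supplies", "GearboxHousing"),
    ("GearboxHousing", "is_part_of", "Gearbox"),
]
prompt = triples_to_prompt(kb, "Which supplier is upstream of Gearbox?")
```

Grounding the generation in explicitly retrieved triples is what lets the framework's answers be traced back to the domain model, addressing the opaqueness concern raised above.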

  • Open Access
  • Research Article
  • 10.3390/informatics12040121
Digital Competencies for a FinTech-Driven Accounting Profession: A Systematic Literature Review
  • Nov 6, 2025
  • Informatics
  • Saiphit Satjawisate + 3 more

Financial Technology (FinTech) is fundamentally reshaping the accounting profession, accelerating the shift from routine transactional activities to more strategic, data-driven functions. This transformation demands advanced digital competencies, yet the scholarly understanding of these skills remains fragmented. To provide conceptual and analytical clarity, this study defines FinTech as an ecosystem of enabling technologies, including artificial intelligence, data analytics, and blockchain, that collectively drive this professional transition. Addressing the lack of systematic synthesis, the study employs a systematic literature review (SLR) guided by the PRISMA 2020 framework, complemented by bibliometric analysis, to map the intellectual landscape. The review focuses on peer-reviewed journal articles published between January 2020 and June 2025, thereby capturing the accelerated digital transformation of the post-pandemic era. The analysis identifies four dominant thematic clusters: (1) the professional context and digital transformation; (2) the educational response and curriculum development; (3) core competencies and their technological drivers; and (4) ethical judgement and professional responsibilities. Synthesising these themes reveals critical research gaps in faculty readiness, curriculum integration, ethical governance, and the empirical validation of institutional strategies. By offering a structured map of the field, this review contributes actionable insights for educators, professional bodies, and firms, and advances a forward-looking research agenda to align professional readiness with the realities of the FinTech era.

  • Open Access
  • Research Article
  • 10.3390/informatics12040122
Percolation–Stochastic Model for Traffic Management in Transport Networks
  • Nov 6, 2025
  • Informatics
  • Anton Aleshkin + 2 more

This article describes a model for optimizing traffic flow control and generating traffic signal phases based on the stochastic dynamics of traffic and the percolation properties of transport networks. As input data (in SUMO), we use lane-level vehicle flow rates, treating them as random processes with unknown distributions. It is shown that the percolation threshold of the transport network can serve as a reliability criterion in a stochastic model of lane blockage and can be used to determine the control interval. To calculate the durations of permissive control signals and their sequence for different directions, vehicle queues are considered and the time required for them to reach the network’s percolation threshold is estimated. Subsequently, the lane with the largest queue (i.e., the shortest time to reach blockage) is selected, and a phase is formed for its signal control, as well as for other lanes that can be opened simultaneously. Simulation results show that when dynamic traffic signal control is used and a percolation-dynamic model for balancing road traffic is applied, lane occupancy indicators such as “congestion” decrease by 19–51% compared to a model with statically specified traffic signal phase cycles. The characteristics of flow dynamics obtained in the simulation make it possible to construct an overall control quality function and to assess, from the standpoint of traffic network management organization, an acceptable density of traffic signals and unsignalized intersections.
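The phase-selection step (serve the lane whose queue will hit the blockage threshold first) can be sketched as follows; this is a simplified illustration, not the paper's SUMO-based stochastic model, and the lane data are invented:

```python
def pick_phase_lane(lanes, threshold):
    """Estimate, per lane, the time until its queue reaches the
    blockage (percolation) threshold, and return the lane that
    gets there first, i.e. the one to serve with the next green phase.

    `lanes` maps lane id -> (current queue length, arrival rate, veh/s).
    """
    def time_to_block(queue, rate):
        if queue >= threshold:
            return 0.0          # already at the threshold
        if rate <= 0:
            return float("inf") # queue never grows
        return (threshold - queue) / rate

    return min(lanes, key=lambda lane: time_to_block(*lanes[lane]))

# Hypothetical intersection state; threshold = queue length that blocks the lane.
lanes = {
    "north": (12, 0.5),  # 12 vehicles queued, 0.5 veh/s arriving
    "east": (4, 0.9),
    "south": (18, 0.2),
}
urgent = pick_phase_lane(lanes, threshold=20)
```

In the paper's full model the arrival rates are random processes with unknown distributions and the threshold comes from the network's percolation properties; this sketch only shows the deterministic core of the selection rule.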