
Related Topics

  • Sensor Fusion Algorithm
  • Multi-sensor System
  • Vehicle Sensors

Articles published on Sensor fusion

5338 search results (sorted by recency)
  • Research Article
  • Cited by: 1
  • 10.1016/j.measurement.2026.120695
ROS2-based real-time autonomous mapping and navigation: Integrating visual SLAM and sensor fusion with performance analysis under varying light
  • Apr 1, 2026
  • Measurement
  • Md Musfiqur Rahman + 5 more


  • Research Article
  • 10.1016/j.foodchem.2026.147879
Flavor perception in the oral processing of mixed grain foods: Flavor release and AI.
  • Apr 1, 2026
  • Food chemistry
  • Feng Liang + 6 more


  • Research Article
  • 10.1088/2057-1976/ae4c93
Multimodal wearable sensor-based stress detection: machine learning pipeline with systematic feature selection and key biomarker insights
  • Mar 12, 2026
  • Biomedical Physics & Engineering Express
  • Shao Ming Ng + 2 more

The increasing awareness of stress-related health impacts has driven demand for accurate, non-invasive stress detection methods, particularly those leveraging wearable sensors. While multimodal sensing approaches have shown promise in enhancing mental stress assessment, the critical role of feature selection in optimizing model performance remains underexplored. This study presents a comprehensive machine learning pipeline for mental stress detection that integrates data preprocessing, feature extraction, systematic feature selection, and classification. Using data collected from 17 participants, we classified stress and relaxation states based on three physiological signals: electrodermal activity (EDA), electrocardiography (ECG), and electroencephalography (EEG). Multimodal sensor fusion was compared against unimodal approaches to assess performance improvements. To identify the most informative features and improve model accuracy, we applied four feature selection methods: Analysis of Variance (ANOVA), Chi-squared (Chi2), Kruskal-Wallis (KW), and Minimum Redundancy Maximum Relevance (MRMR). External validation was conducted using the public Stress Recognition in Automobile Drivers (SRAD) dataset. Our results demonstrated a 12.9% increase in classification accuracy using multimodal data, reaching up to 95.9%, with feature selection contributing an average gain of 4.8%. Among the methods, Chi2 consistently achieved the highest mean accuracy across various feature sets. Key biomarkers included ECG-based median, mean, and root-mean-square; EEG-based beta-to-alpha ratio and relative alpha power; and EDA-based mean and sum phasic activity. These findings highlight the importance of integrating systematic feature selection with multimodal sensor data to enhance the accuracy, robustness, and interpretability of mental stress detection systems.
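
The pipeline this abstract describes (per-modality feature extraction, univariate feature selection, then classification) can be illustrated with a short scikit-learn sketch. The feature matrices, the choice of an SVM classifier, and the number of selected features are placeholder assumptions; only ANOVA and chi-squared selection from the four methods are shown.

```python
# Minimal sketch of a multimodal stress-detection pipeline:
# fuse per-modality features, apply univariate feature selection,
# then classify. Feature matrices are hypothetical placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200                                   # analysis windows (placeholder)
eda = rng.normal(size=(n, 12))            # EDA features, e.g. mean/sum phasic activity
ecg = rng.normal(size=(n, 20))            # ECG features, e.g. median, mean, RMS
eeg = rng.normal(size=(n, 16))            # EEG features, e.g. beta/alpha ratio
y = rng.integers(0, 2, size=n)            # 0 = relaxed, 1 = stressed (placeholder labels)

X = np.hstack([eda, ecg, eeg])            # multimodal fusion at the feature level

# ANOVA F-test selection followed by an SVM classifier.
anova_svm = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="rbf"))
print("ANOVA + SVM acc:", cross_val_score(anova_svm, X, y, cv=5).mean())

# Chi-squared selection needs non-negative inputs, hence the Min-Max scaling.
chi2_svm = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=20), SVC(kernel="rbf"))
print("Chi2 + SVM acc:", cross_val_score(chi2_svm, X, y, cv=5).mean())
```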

  • Research Article
  • 10.1038/s41598-026-42310-8
Sleep awake detection from leg-worn wearables using deep sensor fusion.
  • Mar 12, 2026
  • Scientific reports
  • Yumna Anwar + 6 more

Restful sleep is essential for health, yet many children with Attention Deficit Hyperactivity Disorder (ADHD) experience disturbances such as delayed sleep onset, shorter total sleep time, frequent awakenings, and daytime fatigue. Accurate detection of these issues is important for clinical care, but existing tools have limitations: polysomnography is costly and complex, while wrist devices often miss subtle movement or physiological changes. This study introduces a deep learning approach using data from RestEaze, a leg-worn multimodal wearable that records photoplethysmography (PPG), motion from accelerometer and gyroscope, and temperature signals. Overnight recordings were collected from 14 children referred for ADHD evaluation. A Support Vector Machine (SVM) using handcrafted features was implemented to establish a traditional baseline. Two convolutional neural network (CNN-BiLSTM) models were then developed, employing early and late-fusion of raw multimodal inputs to classify sleep and wake states in short windows. The late-fusion model achieved an area under the ROC curve of 90.94% in five-fold cross-validation. Derived metrics included total sleep time, wake after sleep onset, sleep onset latency, and awakenings. A temporal label-smoothing method further improved consistency. These findings demonstrate the feasibility of leg-based multimodal sensing and deep learning for noninvasive sleep monitoring in pediatric neurodevelopmental populations.
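
A late-fusion design of the kind described, with separate 1-D convolutional encoders per modality whose outputs are concatenated before a bidirectional LSTM, could look roughly like the PyTorch sketch below. Layer sizes, channel counts, and the 300-sample window length are illustrative assumptions, not the published architecture.

```python
# Rough sketch of a late-fusion CNN-BiLSTM sleep/wake classifier.
# Each modality (PPG, accelerometer+gyroscope, temperature) gets its own
# 1-D CNN encoder; encodings are concatenated and passed to a BiLSTM.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, in_channels, out_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(out_channels, out_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, channels, time)
        return self.net(x)         # (batch, out_channels, time)

class LateFusionSleepNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.ppg = ModalityEncoder(1)     # PPG: 1 channel
        self.imu = ModalityEncoder(6)     # accel + gyro: 6 channels
        self.temp = ModalityEncoder(1)    # temperature: 1 channel
        self.lstm = nn.LSTM(input_size=96, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, 2)  # sleep vs. wake logits

    def forward(self, ppg, imu, temp):
        z = torch.cat([self.ppg(ppg), self.imu(imu), self.temp(temp)], dim=1)
        z = z.transpose(1, 2)             # (batch, time, features) for the LSTM
        out, _ = self.lstm(z)
        return self.head(out[:, -1])      # classify from the last time step

model = LateFusionSleepNet()
logits = model(torch.randn(4, 1, 300), torch.randn(4, 6, 300), torch.randn(4, 1, 300))
print(logits.shape)                       # torch.Size([4, 2])
```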

  • Research Article
  • 10.3390/futuretransp6020064
Integrated Multimodal Perception and Predictive Motion Forecasting via Cross-Modal Adaptive Attention
  • Mar 11, 2026
  • Future Transportation
  • Bakhita Salman + 2 more

Accurate environmental perception is fundamental to safe autonomous driving; however, most existing multimodal systems rely on fixed or heuristic sensor fusion strategies that cannot adapt to scene-dependent variations in sensor reliability. This paper proposes Cross-Modal Adaptive Attention (CMAA), a unified end-to-end Bird’s-Eye-View (BEV) perception framework that dynamically fuses camera, LiDAR, and RADAR information through learnable, context-aware modality gating. Unlike static fusion approaches, CMAA adaptively reweights sensor contributions based on global scene descriptors, enabling the robust integration of semantic, geometric, and motion cues without manual tuning. The proposed architecture jointly performs 3D object detection, multi-object tracking, and motion forecasting within a shared BEV representation, preserving spatial alignment across tasks and supporting efficient real-time deployment. Experiments conducted on the official nuScenes validation split demonstrate that CMAA achieves 0.528 mAP and 0.691 NDS, outperforming fixed-weight fusion baselines while maintaining a compact model size and efficient inference. Additional tracking evaluation using the official nuScenes tracking devkit reports improved tracking performance, while motion forecasting experiments show reduced trajectory displacement errors (minADE and minFDE). Ablation studies further confirm the complementary contributions of adaptive modality gating and bidirectional cross-modal refinement, and a stratified dynamic analysis reveals consistent reductions in velocity estimation error across object classes, motion regimes, and environmental conditions. These results demonstrate that adaptive multimodal fusion improves robustness, motion reasoning, and perception reliability in complex traffic environments while remaining computationally efficient for deployment in safety-critical autonomous driving systems.
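
The core gating idea, deriving per-modality weights from a pooled global scene descriptor and using them to reweight camera, LiDAR, and radar BEV features before fusion, is sketched below. The gating network, channel sizes, and weighted-sum fusion are illustrative assumptions rather than the CMAA architecture itself.

```python
# Minimal sketch of learnable, scene-conditioned modality gating over
# BEV feature maps from camera, LiDAR, and radar. The gating network,
# feature sizes, and fusion-by-weighted-sum are illustrative assumptions.
import torch
import torch.nn as nn

class AdaptiveModalityGate(nn.Module):
    def __init__(self, channels=64, num_modalities=3):
        super().__init__()
        # Global scene descriptor: pooled statistics of all modalities.
        self.gate = nn.Sequential(
            nn.Linear(channels * num_modalities, 64),
            nn.ReLU(),
            nn.Linear(64, num_modalities),
        )

    def forward(self, bev_feats):          # list of (B, C, H, W) BEV maps
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in bev_feats], dim=1)
        weights = torch.softmax(self.gate(pooled), dim=1)      # (B, M)
        stacked = torch.stack(bev_feats, dim=1)                # (B, M, C, H, W)
        fused = (weights[:, :, None, None, None] * stacked).sum(dim=1)
        return fused, weights

gate = AdaptiveModalityGate()
cam, lidar, radar = (torch.randn(2, 64, 128, 128) for _ in range(3))
fused, w = gate([cam, lidar, radar])
print(fused.shape, w.shape)   # torch.Size([2, 64, 128, 128]) torch.Size([2, 3])
```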

  • Research Article
  • 10.38124/ijisrt/26mar007
Explainable Deep Neural Networks for Neurological Disorder Classification: A Focus on Parkinson’s Disease Tremor Analysis Via Wearable Sensor Fusion
  • Mar 10, 2026
  • International Journal of Innovative Science and Research Technology
  • Utsha Sarker + 4 more

Parkinson's disease (PD) is a progressive neurodegenerative disease with motor manifestations that include resting tremor, bradykinesia, and rigidity. Accurate assessment of tremor still relies heavily on subjective clinical rating criteria (the Unified Parkinson's Disease Rating Scale, UPDRS), which may not capture minor fluctuations or real-world variability. This limitation highlights a clear need for objective, continuous, and interpretable tremor-monitoring solutions to support early diagnosis and personalized disease management. In this study, an explainable deep learning framework for tremor classification based on wearable sensor fusion is proposed. Wrist-worn accelerometer and gyroscope signals are fused and processed using a hybrid of 1-D Convolutional Neural Networks (1-D CNN) and Bidirectional Long Short-Term Memory (BiLSTM), which captures both local motion patterns and long-term temporal dependencies. Transparency is provided by explainable AI methods (Grad-CAM and SHAP) that highlight the important time segments and each sensor's contribution to the model's predictions. Experiments were performed on a dataset of 62 subjects (38 PD patients and 24 healthy subjects) recorded at 100 Hz. Signals were divided into overlapping 1.28 s time windows and labeled as tremor or non-tremor, PD or control. The proposed model achieved 94.3% accuracy, 93.8% F1-score, 0.96 AUC, 92.5% sensitivity, and 95.1% specificity, outperforming conventional CNN and SVM approaches by 6-9% in accuracy. Explainability analysis showed a dominant influence of tremor-related oscillatory components (4-6 Hz) on the predictions, yielding clinically meaningful explanations and increasing confidence in the model for real-world use.
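
The windowing step (100 Hz signals cut into overlapping 1.28 s windows, i.e. 128 samples) can be illustrated with a small helper; the 50% overlap and the injected 5 Hz component are demonstration assumptions, since the abstract does not state the stride.

```python
# Sketch of segmenting fused accelerometer+gyroscope signals into
# overlapping 1.28 s windows at 100 Hz (128 samples per window).
# The 50% overlap (stride of 64 samples) is an assumption.
import numpy as np

def sliding_windows(signal, win=128, stride=64):
    """signal: (T, channels) array -> (n_windows, win, channels)."""
    starts = range(0, signal.shape[0] - win + 1, stride)
    return np.stack([signal[s:s + win] for s in starts])

fs = 100                                   # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)               # 30 s of placeholder data
imu = np.random.randn(t.size, 6)           # 3-axis accel + 3-axis gyro
imu[:, 0] += np.sin(2 * np.pi * 5 * t)     # inject a 5 Hz tremor-band component

windows = sliding_windows(imu)
print(windows.shape)                       # (n_windows, 128, 6)
```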

  • Research Article
  • 10.3390/s26051659
Survey on Reconnaissance Autonomous Robotic Systems for Disaster Management.
  • Mar 5, 2026
  • Sensors (Basel, Switzerland)
  • Sahaj Sinha + 2 more

Robotic systems that can operate in dangerous environments are becoming essential during emergencies. This survey focuses exclusively on Unmanned Ground Vehicles (UGVs) for disaster reconnaissance, reviewing the latest ground reconnaissance robots that use computer vision (CV), machine learning (ML), MCU-based control, LoRa communication, DC motors, and dual-power systems. The analysis covers hardware, algorithms, and performance in both field and laboratory settings. It highlights clear progress in navigation, sensor fusion, and situational awareness, and identifies remaining challenges such as energy limitations, robustness in harsh conditions, and the lack of standardized benchmarks. Synthesizing findings from over 190 recent studies (2020-2025) in ground-based disaster robotics, the survey provides a comprehensive overview of current capabilities and research gaps, and summarizes open issues and potential remedies for future disaster-response systems.

  • Research Article
  • 10.1088/1361-6501/ae400d
Improved blind-spot object estimation via camera–LiDAR sensor fusion with IMM‐KF incorporating error characteristics
  • Mar 5, 2026
  • Measurement Science and Technology
  • Min Gyu Kim + 1 more

Extending sensor detection range and improving perception accuracy are critical for achieving high levels of safety and reliability in autonomous driving systems, particularly for mitigating sensor blind-spots. In this paper, we propose a camera-3D Light Detection and Ranging (LiDAR) sensor fusion method that leverages road convex mirrors to achieve high-accuracy estimation of blind-spot objects that cannot be directly perceived by onboard sensors. The approach begins with sensor calibration, followed by the use of a segmentation-based deep learning detector to identify blind-spot objects and a data association process to refine detection results. To address distortion and estimation errors caused by convex mirror reflections, we incorporate an Interacting Multiple Model-Kalman Filter (IMM-KF) based on the error characteristics derived from the association process. The proposed method was validated through scenario-based experiments. Experimental results demonstrate that the proposed sensor data fusion method outperforms conventional methods in object estimation under the complex maneuvers of the blind-spot object.
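
The IMM part of an IMM-KF maintains several motion models in parallel, mixes their estimates according to Markov transition probabilities, and reweights them by measurement likelihood. A generic numpy sketch of those two IMM-specific steps is shown below; the transition matrix, the two-model setup, and the 4-D state are illustrative assumptions, and the per-model Kalman filter predict/update steps are omitted.

```python
# Sketch of the model-mixing and probability-update steps of an
# Interacting Multiple Model (IMM) estimator with two motion models.
# Transition matrix, model set, and state layout are illustrative.
import numpy as np

PI = np.array([[0.95, 0.05],               # Markov model-transition matrix
               [0.05, 0.95]])
mu = np.array([0.5, 0.5])                  # current model probabilities

def imm_mix(states, covs, mu, PI):
    """Mix per-model states/covariances before each prediction step."""
    c = PI.T @ mu                          # predicted model probabilities
    w = (PI * mu[:, None]) / c[None, :]    # mixing weights w[i, j] = P(model i | model j)
    mixed_x, mixed_P = [], []
    for j in range(len(mu)):
        xj = sum(w[i, j] * states[i] for i in range(len(mu)))
        Pj = sum(w[i, j] * (covs[i] + np.outer(states[i] - xj, states[i] - xj))
                 for i in range(len(mu)))
        mixed_x.append(xj)
        mixed_P.append(Pj)
    return mixed_x, mixed_P, c

def imm_update_probs(likelihoods, c):
    """Update model probabilities from per-model measurement likelihoods."""
    mu_new = likelihoods * c
    return mu_new / mu_new.sum()

# Example with 4-D states [x, y, vx, vy] and dummy per-model likelihoods.
states = [np.zeros(4), np.zeros(4)]
covs = [np.eye(4), 4 * np.eye(4)]
mixed_x, mixed_P, c = imm_mix(states, covs, mu, PI)
mu = imm_update_probs(np.array([0.8, 0.2]), c)
print(mu)                                  # updated model probabilities
```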

  • Research Article
  • 10.3390/s26051660
Design and Performance Validation of 4D Radar ICP-Integrated Navigation with Stochastic Cloning Augmentation.
  • Mar 5, 2026
  • Sensors (Basel, Switzerland)
  • Hyeongseob Shin + 2 more

Automotive radar has emerged as a pivotal technology for navigation in GNSS-denied environments, offering superior robustness to adverse weather and fluctuating lighting conditions compared to vision or LiDAR-based sensors. Despite these advantages, the inherent sparsity and noise of radar measurements often lead to degraded estimation accuracy and system reliability. To address these challenges, various radar-based localization frameworks have been explored, ranging from optimization-based and Extended Kalman Filter (EKF) approaches fused with Inertial Measurement Units (IMUs) to point cloud registration techniques like Iterative Closest Point (ICP). While filter-based methods are favored in multi-sensor fusion for their proven stability, ICP is widely utilized for high-precision pose estimation in point-cloud-centric systems. In this study, we propose a novel Radar-Inertial Odometry (RIO) framework that synergistically integrates ICP-based relative pose estimation with model-based sensor fusion. The proposed methodology leverages relative transformations derived from ICP alongside ego-velocity estimations obtained from radar Doppler measurements. To effectively incorporate relative ICP constraints, a stochastic cloning technique is implemented to augment previous states and their associated covariances, ensuring that the uncertainty of historical poses is explicitly accounted for. The performance of the proposed method is validated using public open-source datasets, demonstrating higher localization accuracy and more consistent performance compared to existing algorithms used for comparison.
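
Stochastic cloning augments the filter state with a frozen copy of the current pose so that a later relative measurement (such as an ICP transform between the cloned and current poses) can be fused with consistent cross-covariances. A generic numpy sketch of the augmentation step follows; the state layout and dimensions are assumptions, not the paper's formulation.

```python
# Sketch of stochastic cloning: augment the filter state with a copy of
# the current pose so a later relative-pose measurement (e.g. from ICP)
# can be fused with consistent cross-covariances. State layout and sizes
# are illustrative assumptions.
import numpy as np

def clone_state(x, P, pose_idx):
    """Append a clone of the pose sub-state and expand the covariance."""
    n = x.size
    k = len(pose_idx)
    J = np.zeros((k, n))
    J[np.arange(k), pose_idx] = 1.0        # selects the pose entries
    x_aug = np.concatenate([x, x[pose_idx]])
    # P_aug = [[P, P J^T], [J P, J P J^T]] keeps the clone fully correlated
    # with the live state at the cloning instant.
    P_aug = np.block([[P,      P @ J.T],
                      [J @ P,  J @ P @ J.T]])
    return x_aug, P_aug

x = np.zeros(9)                            # e.g. [position(3), velocity(3), attitude(3)]
P = np.eye(9) * 0.1
x_aug, P_aug = clone_state(x, P, pose_idx=[0, 1, 2, 6, 7, 8])
print(x_aug.shape, P_aug.shape)            # (15,) (15, 15)
```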

  • Research Article
  • 10.17780/ksujes.1830531
TECHNOLOGICAL AND PSYCHOSOCIAL DIMENSIONS OF AGGRESSIVE DRIVING AND ROAD RAGE: A PERSPECTIVE BASED ON INTELLIGENT TRANSPORTATION SYSTEMS, ARTIFICIAL INTELLIGENCE, AND SOCIETAL IMPACTS
  • Mar 3, 2026
  • Kahramanmaraş Sütçü İmam Üniversitesi Mühendislik Bilimleri Dergisi
  • Özgür Karaduman

Traffic safety is a multidimensional field shaped not only by infrastructure and vehicles but also by driver behavior and its technological mediation within Intelligent Transportation Systems (ITS). Among these behaviors, aggressive driving and road rage are critical phenomena that directly affect safety and traffic flow. This review examines psychological and societal determinants such as anger, stress, impatience, and cultural norms; Artificial Intelligence (AI) and Machine Learning (ML)-based detection and management approaches using CAN-bus, biometric, and video data; and Advanced Driver Assistance Systems (ADAS) and ITS applications. The study explains how behavioral evidence supports AI-based risk assessment, early-warning mechanisms, and ITS applications by enhancing situational awareness and adaptive response. It also discusses the growing role of data-driven analytics and sensor fusion in predicting and mitigating risky driving patterns in conventional and connected vehicle environments. Recent research shows that aligning human factors insights with model design enhances reliability and supports adaptive, real-time safety functions. The review provides practical implications for researchers and policymakers regarding intelligent behavior modeling, ethical data use, and human-centric ITS design, and highlights future research areas such as cross-cultural analyses, biometric-aware modeling, and human–autonomy interaction in next-generation mobility systems.

  • Research Article
  • 10.1016/j.ress.2025.111960
A cascaded machine learning model for identification of ship breach characteristics during flooding using sensor data fusion
  • Mar 1, 2026
  • Reliability Engineering & System Safety
  • Jiayu Diao + 2 more


  • Research Article
  • 10.1016/j.jwpe.2026.109751
Feature-DTW and Mel-Spectrogram-based sensor fusion: A framework for leak detection and classification in water distribution systems
  • Mar 1, 2026
  • Journal of Water Process Engineering
  • Matheus Medeiros Donatoni + 5 more

Water leak detection is essential to prevent water, energy, and financial losses while ensuring a reliable supply. In this context, the present work uses a laboratory-based leak dataset and proposes an efficient framework for leak detection and classification, combining advanced feature engineering with sensor fusion and machine learning techniques. The method employs Dynamic Time Warping (DTW) for comparison with a reference-instance strategy within a refined Feature-DTW framework that uses Mel-Spectrograms for time-series representation. This approach reduces computational cost through low-dimensional spectrogram representation and enhances feature richness by extracting attributes from the DTW alignment path. Three sensor fusion strategies are proposed and evaluated, integrating multi-sensor data with trade-offs between accuracy and computational efficiency. Using the extreme gradient boosting algorithm, the framework achieves high performance in both leak detection and leak type classification. The method attained an accuracy of 0.991 for leak detection and 0.984 for leak type classification, matching state-of-the-art results while applying multi-sensor techniques and maintaining a computational footprint suitable for real-time deployment on edge devices. The results demonstrate that spectrogram-based Feature-DTW, when combined with targeted reference instances and efficient fusion strategies, is a powerful and scalable approach for accurate leak detection and classification.

Highlights:
  • A sampling strategy that designates reference instances for similarity comparisons.
  • A pre-processing pipeline for optimal frequency filtering.
  • Mel-Spectrogram as a computationally efficient representation of time series data.
  • Proposal and comparative testing of three distinct sensor fusion methods via DTW.
  • Expanded feature set through incorporation of DTW warping path attributes.
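
The Feature-DTW idea, representing each signal as a low-dimensional Mel-spectrogram, aligning it to a designated reference instance with DTW, and feeding features of the alignment path to a gradient-boosted classifier, is sketched below with librosa and XGBoost. The sampling rate, the specific path features, and the random placeholder data are assumptions, not the authors' configuration.

```python
# Sketch of the Feature-DTW idea: represent each sensor signal as a
# Mel-spectrogram, align it to a designated reference instance with DTW,
# and derive features from the alignment path for a gradient-boosted
# classifier. Signal source, parameters, and features are illustrative.
import numpy as np
import librosa
from xgboost import XGBClassifier

def mel_rep(signal, sr=8000, n_mels=32):
    """Low-dimensional Mel-spectrogram representation of a time series."""
    S = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)

def dtw_features(sig, ref, sr=8000):
    """Features derived from the DTW alignment between signal and reference."""
    D, wp = librosa.sequence.dtw(X=mel_rep(sig, sr), Y=mel_rep(ref, sr))
    cost = D[-1, -1]                               # accumulated alignment cost
    path = np.asarray(wp)
    diag_dev = np.abs(path[:, 0] - path[:, 1])     # deviation from the diagonal
    return [cost, len(path), diag_dev.mean(), diag_dev.max()]

# Hypothetical dataset: raw 1-D sensor signals and leak / no-leak labels.
rng = np.random.default_rng(0)
signals = [rng.standard_normal(8000).astype(np.float32) for _ in range(40)]
labels = rng.integers(0, 2, size=40)
reference = signals[0]                             # designated reference instance

X = np.array([dtw_features(s, reference) for s in signals])
clf = XGBClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:5]))
```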

  • Research Article
  • 10.1088/2631-8695/ae4a6d
Fingerprint recognition system using score-level fusion of multi-sensor datasets
  • Mar 1, 2026
  • Engineering Research Express
  • Jwan Abdulkhaliq Mohammed + 1 more

Score-level fusion in multisensor fingerprint recognition is a technique that combines the individual confidence scores from different fingerprint sensors or matching algorithms to produce a single, more reliable final score for a person's identity. In this paper, the performances of fingerprint recognition systems using the SDUMLA-HMT multisensor database acquired by FPR620 and FT-2BU sensors are compared, and a score-level fusion approach is proposed to leverage their strengths for person authentication. To this end, an end-to-end fingerprint recognition pipeline is designed using minutiae points for implementing both unimodal and multisensor fusion systems for person identification. Identical methods are applied, including preprocessing, minutiae extraction via the crossing number method, and Multiple SVM classification. Experimental findings show FPR620 outperforms FT-2BU, achieving accuracy (0.97 with EER = 0.03), recall (0.9722), precision (0.9743), F1-score (0.9732), and AUC (0.98) vs. accuracy (0.921 with EER = 0.079), recall (0.9178), precision (0.9236), F1-score (0.9206), and AUC (0.94). Notably, multisensor fusion yields improved performance: accuracy (0.984 with EER = 0.016), recall (0.9825), precision (0.9844), F1-score (0.9834), and AUC (0.99) using the max rule, and accuracy (0.979 with EER = 0.021), recall (0.974), precision (0.9785), F1-score (0.9762), and AUC (0.987) using the weighted sum. These findings confirm FPR620's superiority and demonstrate sensor fusion's potential benefit.
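
The two fusion rules evaluated in the paper, the max rule and a weighted sum over normalized matcher scores, are simple enough to show directly; the scores, the 0.6 weight, and the decision threshold below are placeholder values for illustration.

```python
# Sketch of score-level fusion of two fingerprint sensors/matchers using
# the max rule and a weighted sum, after min-max normalizing each score
# set to a common range. Scores and weights are illustrative placeholders.
import numpy as np

def min_max(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Hypothetical match scores for the same probes from two sensors.
scores_fpr620 = min_max([0.91, 0.42, 0.78, 0.15])
scores_ft2bu = min_max([0.83, 0.55, 0.61, 0.30])

fused_max = np.maximum(scores_fpr620, scores_ft2bu)          # max rule
w = 0.6                                                      # weight favoring the stronger sensor
fused_wsum = w * scores_fpr620 + (1 - w) * scores_ft2bu      # weighted sum

decision = fused_wsum >= 0.5                                 # threshold tuned on validation data
print(fused_max, fused_wsum, decision)
```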

  • Research Article
  • 10.1016/j.ymssp.2026.114016
Enhanced H∞ loop-shaping control with virtual sensor fusion for active seismic vibration isolation
  • Mar 1, 2026
  • Mechanical Systems and Signal Processing
  • Xiaoqi Yin + 6 more


  • Research Article
  • 10.1177/17298806261432728
Distributed cooperative simultaneous localization and mapping for dense micro-robot swarms: A stigmergic approach with hardware-constrained sensor fusion
  • Mar 1, 2026
  • International Journal of Advanced Robotic Systems
  • Le M Triet + 1 more

To address the challenge of deploying dense micro-robot swarms where classical simultaneous localization and mapping (SLAM) methods are computationally infeasible, we propose a hardware-constrained, stigmergic cooperative SLAM framework. Our system enables swarms to map unknown environments in real time, without a central coordinator or high-bandwidth links. Our method introduces five novel components: (i) Stigmergic Counter-Consensus, a bounded, monotone, and bandwidth-frugal consensus rule over occupancy counters; (ii) ATOP-Raycast, an Adaptive Thin-Obstacle-Preserving Bresenham variant with probabilistic endpoint diffusion; (iii) Proximal Delta Encoding of map updates using tilewise run-length and majority masks; (iv) a Budget-Aware extended Kalman filter that codesigns fusion rate and numerical precision with MCU limits; and (v) a Tri-Force Frontier-Cohesion controller yielding emergent exploration while maintaining communication neighborhoods. In real-world validation with 40 robots, the framework achieves a thin-feature retention rate of 92.4% and a final map Intersection-over-Union (IoU) of 0.89. This performance is sustained with a minimal communication overhead of ∼110 bytes per packet, demonstrating near-linear scalability on ESP32-class hardware while preserving critical geometry. Together, these components yield near-linear scalability to 40+ robots at 20 Hz on ESP32-class hardware, preserve thin obstacles, and achieve low collision rates with modest communication. We provide algorithmic details, complexity bounds, and convergence guarantees, and validate our approach through a comprehensive suite of simulations.
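
As a purely illustrative aside, a bounded and monotone merge of per-cell occupancy counters, one plausible flavor of low-bandwidth counter exchange between neighboring robots, might look like the sketch below; this is not the paper's Stigmergic Counter-Consensus rule, whose details are not reproduced here.

```python
# Illustrative sketch of a bounded, monotone merge of per-cell occupancy
# counters between neighboring robots. This is NOT the paper's
# Stigmergic Counter-Consensus rule; an element-wise saturating max is
# shown only to convey the flavor of bandwidth-frugal counter exchange.
import numpy as np

CAP = 15                                     # counter saturation bound (fits in 4 bits)

def merge_counters(local, received, cap=CAP):
    """Element-wise, order-insensitive merge: monotone and bounded."""
    return np.minimum(np.maximum(local, received), cap).astype(np.uint8)

local = np.array([[0, 3], [7, 15]], dtype=np.uint8)     # hit counts per grid cell
received = np.array([[2, 1], [9, 12]], dtype=np.uint8)  # neighbor's counts
print(merge_counters(local, received))                  # [[2 3] [9 15]]
```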

  • Research Article
  • 10.1016/j.atech.2026.101846
Adaptive GNSS–UWB sensor fusion for reliable localization in precision agriculture
  • Mar 1, 2026
  • Smart Agricultural Technology
  • Anas Osman + 3 more


  • Research Article
  • 10.1016/j.biosx.2025.100735
A review on the impact of AI-enabled thermal imaging and IoT sensor fusion on early detection of mastitis in dairy cattle
  • Mar 1, 2026
  • Biosensors and Bioelectronics: X
  • Arjun Asogan + 6 more


  • Research Article
  • 10.61173/zpz0b409
A Review of Multimodal Sensor Technologies and Fusion Methods for Intelligent Robots
  • Feb 28, 2026
  • Science and Technology of Engineering, Chemistry and Environmental Protection
  • Ziyu Xie

With the rapid development of artificial intelligence and robotics, intelligent robots are gradually transitioning from structured environments to open and complex scenarios, posing unprecedented requirements for the depth, breadth, and precision of environmental perception. Single-modal sensors can no longer satisfy the requirements of complex tasks, and multimodal sensor fusion technology has become a key approach to enhance the robot's environmental perception, state estimation, and decision-making capabilities. This paper presents a systematic review of multimodal sensing technologies and fusion methodologies for intelligent robots. First, the paper outlines the principles and advances of various core sensors; then it delves into key technologies from signal preprocessing to fusion algorithms spanning classical filtering and deep learning; and it synthesizes their application performance in typical scenarios such as navigation, operation, and human-robot collaboration. Finally, confronting current challenges, the paper envisions the future trends of sensing technologies toward intelligence, flexibility, and chip-based development, aiming to offer insights for researchers in the field.

  • Research Article
  • 10.52939/ijg.v22i2.4785
Analysis of GPS/IMU Sensor Fusion to Improve Mapping Accuracy on UAV Quadrotor Using LiDAR Technology
  • Feb 28, 2026
  • International Journal of Geoinformatics
  • M.N Cahyadi

Unmanned Aerial Vehicles (UAVs) play a crucial role in navigation, requiring accurate sensors to determine position, speed, and orientation, especially in unknown environments. Direct navigation systems like the Global Positioning System (GPS) provide positional data, while indirect systems, such as the Inertial Measurement Unit (IMU), integrate accelerometer and gyroscope data to supply speed and orientation information. This study investigates the integration of GPS and IMU sensors using the Unscented Kalman Filter (UKF) to improve localization accuracy on a cost-effective UAV Quadrotor equipped with LiDAR Livox. The research methodology involved collecting raw data from GPS, IMU, and LiDAR sensors during UAV flights. These data were processed using a UKF-based mathematical model to fuse sensor inputs and generate accurate point cloud models. Results show that the UKF fusion method achieved a localization accuracy of 0.403 m, with maximum residuals recorded as 1.332 m for the X axis, 20.421 m for the Y axis
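
A loosely coupled GPS/IMU fusion with an Unscented Kalman Filter can be sketched with the filterpy library as below; the 2-D constant-acceleration-per-step process model, the noise values, and the use of filterpy are assumptions made for illustration, not the study's implementation.

```python
# Minimal sketch of loosely coupled GPS/IMU fusion with an Unscented
# Kalman Filter (filterpy): IMU acceleration drives the process model
# and GPS position fixes are fused as measurements. The 2-D process
# model and noise values are assumptions.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt, accel=np.zeros(2)):
    """State [px, py, vx, vy] propagated with IMU-measured acceleration."""
    px, py, vx, vy = x
    ax, ay = accel
    return np.array([px + vx * dt + 0.5 * ax * dt**2,
                     py + vy * dt + 0.5 * ay * dt**2,
                     vx + ax * dt,
                     vy + ay * dt])

def hx(x):
    """GPS observes position only."""
    return x[:2]

dt = 0.1
points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)
ukf.P *= 10.0
ukf.R = np.diag([1.5**2, 1.5**2])      # GPS position noise (m^2), assumed
ukf.Q = np.eye(4) * 0.05               # process noise, assumed

for k in range(50):
    imu_accel = np.array([0.2, 0.0])                 # placeholder IMU reading
    gps_fix = np.array([0.1 * k, 0.0]) + np.random.randn(2) * 1.5
    ukf.predict(accel=imu_accel)                     # IMU-driven prediction
    ukf.update(gps_fix)                              # GPS correction

print(ukf.x)                                         # fused position/velocity estimate
```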

  • Research Article
  • 10.3390/technologies14030143
Integrating Artificial Intelligence into Mechatronics: A Comprehensive Study of Its Influence on System Performance, Autonomy, and Manufacturing Efficiency
  • Feb 27, 2026
  • Technologies
  • Ganiyat Salawu + 1 more

The rapid evolution of Artificial Intelligence (AI) has significantly transformed the capabilities, performance, and autonomy of modern mechatronic systems. As industries transition toward intelligent and interconnected manufacturing environments, AI has emerged as a powerful enabler of real-time decision-making, adaptive control, predictive maintenance, and autonomous operation. This review provides a comprehensive analysis of AI integration within mechatronic systems, examining its influence on system performance, autonomy, and manufacturing efficiency. Key AI techniques including machine learning, deep learning, reinforcement learning, evolutionary optimization, and computer vision are evaluated in terms of their applications in control, sensing, diagnostics, and robotics. The paper also highlights advancements in AI-driven motion control, autonomous navigation, sensor fusion, and smart factory operations. Critical challenges such as data requirements, computational constraints, system interoperability, and safety concerns are discussed to identify research gaps. Finally, emerging trends and future directions, such as edge AI, digital twins, explainable AI, and fully autonomous mechatronic cells, are explored. This review consolidates current knowledge and provides insights to guide researchers and practitioners in developing next-generation intelligent mechatronic systems capable of supporting the demands of Industry 4.0 and beyond.
