
Related Topics

  • Social Robots
  • Intelligent Robot
  • Robot Interaction
  • Service Robots
  • Domestic Robots
  • Humanoid Robot

Articles published on Robot perception

830 search results, sorted by recency

  • Research Article
  • 10.3389/frobt.2025.1698333
Ecg2o: a seamless extension of g2o for equality-constrained factor graph optimization
  • Jan 20, 2026
  • Frontiers in Robotics and AI
  • Anas Abdelkarim + 2 more

Factor graph optimization serves as a fundamental framework for robotic perception, enabling applications such as pose estimation, simultaneous localization and mapping (SLAM), structure-from-motion (SfM), and situational modeling. Traditionally, these methods solve unconstrained least squares problems using algorithms such as Gauss-Newton and Levenberg-Marquardt. However, extending factor graphs with native support for hard equality constraints can yield more accurate state estimates and broaden their applicability, particularly in planning and control. Prior work has addressed equality handling either by soft penalties (large weights) or by nested-loop Augmented Lagrangian (AL) schemes. In this paper, we propose a novel extension of factor graphs that seamlessly incorporates hard equality constraints without requiring additional optimization techniques. Our approach maintains the efficiency and flexibility of existing second-order optimization techniques while ensuring constraint satisfaction. To validate the proposed method, an autonomous-vehicle velocity-tracking optimal control problem is solved and benchmarked against an AL baseline, both implemented in g2o. Additional comparisons are conducted in GTSAM, where the penalty method and AL are evaluated against our g2o implementations. Moreover, we introduce ecg2o, a header-only C++ library that extends the widely used g2o library with full support for hard equality-constrained optimization. This library, along with demonstrative examples and the optimal control problem, is available as open source at https://github.com/snt-arg/ecg2o.
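The hard-equality idea described above amounts to solving a constrained least-squares problem at each linearization. A minimal Python sketch (not the ecg2o implementation, which is a C++ extension of g2o) of Gauss-Newton with one equality constraint handled through a KKT saddle-point system:

```python
import numpy as np

def constrained_gauss_newton(r, Jr, c, Jc, x0, iters=20):
    """Toy Gauss-Newton for min ||r(x)||^2 subject to c(x) = 0.
    Each iteration linearizes residuals and constraints and solves
    the resulting KKT saddle-point system for the step dx."""
    x = x0.astype(float)
    for _ in range(iters):
        J, C = Jr(x), Jc(x)
        H = J.T @ J                      # Gauss-Newton Hessian approximation
        g = J.T @ r(x)
        n, m = H.shape[0], C.shape[0]
        # KKT system: [H  C^T; C  0] [dx; lambda] = [-g; -c(x)]
        K = np.block([[H, C.T], [C, np.zeros((m, m))]])
        rhs = np.concatenate([-g, -c(x)])
        x = x + np.linalg.solve(K, rhs)[:n]
    return x

# Illustrative example: pull x toward (1, 2) subject to x0 + x1 = 4
r  = lambda x: x - np.array([1.0, 2.0])
Jr = lambda x: np.eye(2)
c  = lambda x: np.array([x[0] + x[1] - 4.0])
Jc = lambda x: np.array([[1.0, 1.0]])
x = constrained_gauss_newton(r, Jr, c, Jc, np.zeros(2))
```

Because this example is linear, a single iteration already lands on the constrained optimum (1.5, 2.5); the constraint is satisfied exactly rather than approximately, which is the advantage over large-weight soft penalties.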

  • Research Article
  • 10.1002/adma.202521375
Recent Progress on Flexible Multimodal Sensors: Decoupling Strategies, Fabrication and Applications.
  • Jan 19, 2026
  • Advanced materials (Deerfield Beach, Fla.)
  • Tao Wu + 10 more

Flexible multimodal sensors have garnered significant attention in research areas such as electronic skin, advanced robotics, and personalized health monitoring due to their ability to leverage the complementary advantages of diverse sensing units; accordingly, a primary decoupling strategy exploits differences in the fundamental types of signals generated. Nevertheless, flexible multimodal sensors persistently face challenges such as signal crosstalk and complex integration processes, which constrain their performance. This review delineates recent advances in flexible multimodal sensor decoupling through fundamental material design guided by physical principles, structural design, and AI-driven signal-decoupling architectures. Additionally, we explore the various applications of flexible multimodal sensors, encompassing environmental monitoring, physiological health tracking, human-machine interaction, and robotic perception. Finally, the conclusions, challenges, and future perspectives for next-generation flexible multimodal sensing systems are discussed.

  • Research Article
  • 10.3389/frobt.2025.1737028
Perceptions of the Furhat social robot administering a mental health assessment: a pilot mixed-method exploration
  • Jan 8, 2026
  • Frontiers in Robotics and AI
  • Paulina Tsvetkova

In the era of artificial intelligence and rapidly advancing robotics, the field of Human–Robot Interaction (HRI) has taken center stage across multiple domains, including psychology. From a psychological perspective, it is therefore essential to deepen our understanding of the factors that shape the quality of these interactions and their implications. This emphasis also aligns with the principles of Industry 5.0, which prioritize human well-being and use technologies to promote sustainable progress. The present study employs an exploratory mixed-method approach and aims to examine perceptions of warmth, competence and discomfort with the Furhat social robot in a psychological assessment setting. Specifically, we investigated young adults’ interactions with the Furhat social robot while it administered the Depression, Anxiety and Stress Scale (DASS-21). Following the interaction, the participants completed the short version of the Robot Social Attributes Scale (RoSAS-SF) to assess perceived warmth, competence and discomfort, and provided qualitative feedback regarding their interactional experiences and acceptance of the robot. The findings provide preliminary insights into the respondents’ perceptions of and openness toward robot-administered psychological screening, suggesting that the Furhat social robot may have potential as an assistive tool in mental health assessment contexts. These results highlight the need for further research with larger samples to examine the role of social robots in psychological practice more comprehensively.

  • Research Article
  • 10.1186/s40359-025-03655-3
Perceiving minds in machines: how perceived theory of mind in robots influences human–robot empathy through the lens of mind perception theory
  • Dec 29, 2025
  • BMC Psychology
  • Ruolin Fan + 3 more

Background: With the rapid advancement of emotion-processing algorithms in artificial intelligence, it is essential to explore the evolving relationships between humans and robots. This exploration can prepare society for the future widespread application of social robots and address the new social dynamics involving AI agents. Human–robot empathy emerges as a crucial avenue for exploring the emotional connections between humans and robots. The purpose of this study was to investigate the impact of users’ perceptions of robots’ minds on human–robot empathy. Methods: This study manipulated perceived theory of mind (ToM) in robots through human–robot interaction scenarios, utilizing four experiments to assess the effects of perceived ToM in robots, categorized as cognitive ToM (cToM) in Experiments 1a and 2a and affective ToM (aToM) in Experiments 1b and 2b, on human–robot empathy, including pain empathy and empathic concern. Experiments 1a and 1b examined the influence of perceived ToM in robots on human–robot empathy within classic ToM scenarios, while Experiments 2a and 2b were conducted within real service contexts, further investigating the mediating role of users’ mind perceptions of robots. Findings: First, perceiving a robot with high aToM significantly enhanced users’ pain empathy and empathic concern towards robots, with the experience dimension of mind perception potentially serving as an indirect-only mediator in this relationship. Second, in the real home-service scenarios of Experiment 2, while the total effect of high cToM on empathic concern was not statistically significant after multiple-comparisons correction, mediation analysis revealed a significant negative direct effect alongside a positive indirect effect through agency. This pattern suggests that perceiving high cToM may simultaneously inhibit empathic concern directly while potentially fostering it through enhanced agency perception. Conclusion: The findings demonstrate that perceived aToM in robots consistently enhances human–robot emotional interactions, while revealing a more complex dual-pathway mechanism for cToM effects. These results provide valuable insights into how distinct dimensions of mind perception shape human–robot relationships.

  • Research Article
  • 10.3390/robotics15010008
RA6D: Reliability-Aware 6D Pose Estimation via Attention-Guided Point Cloud in Aerosol Environments
  • Dec 29, 2025
  • Robotics
  • Woojin Son + 4 more

We address the problem of 6D object pose estimation in aerosol environments, where RGB and depth sensors experience correlated degradation due to scattering and absorption. Handling such spatially varying degradation typically requires depth restoration, but obtaining ground-truth complete depth in aerosol conditions is prohibitively expensive. To overcome this limitation without relying on costly depth completion, we propose RA6D, a framework that integrates attention-guided reliability modeling with feature distillation. The attention map generated during RGB dehazing reflects aerosol distribution and provides a compact indicator of depth reliability. By embedding this attention as an additional feature in an Attention-Guided Point cloud (AGP), the network can adaptively respond to spatially varying degradation. In addition, to address the scarcity of aerosol-domain data, we employ clean-to-aerosol feature distillation, transferring robust representations learned under clean conditions. Experiments on aerosol benchmarks show that RA6D achieves higher accuracy and significantly faster inference than restoration-based pipelines, offering a practical solution for real-time robotic perception under severe visual degradation.

  • Research Article
  • 10.1007/s42452-025-08130-7
Outdoor perception of robots based on SLAM technology and binocular vision positioning technology
  • Dec 22, 2025
  • Discover Applied Sciences
  • Liye Liu


  • Research Article
  • 10.1002/admt.202501862
Skin‐Inspired, High‐Sensitive, Nanocrack‐Based Flexible Three‐Directional Force Sensor for Soft Robots’ Sensing
  • Dec 16, 2025
  • Advanced Materials Technologies
  • Chi Zhang + 6 more

Three‐directional flexible force sensors capable of distinguishing and detecting normal and shear forces are crucial for object perception in soft robotics. Inspired by the spinosum and mechanoreceptors in human skin for three‐directional force sensing, a three‐directional flexible force sensor is proposed by using a pillar as the spinosum and employing circumferentially arranged nanocrack‐based strain sensing units as ultra‐sensitive mechanoreceptors. The sensor can discern normal and shear forces by analyzing deformations of four sensing units. Sensitivity for normal and shear forces of the sensor is 12.23/N and 0.21/N, respectively, within ranges of 0–0.41 N and 0–0.75 N. By integrating the sensor into a soft gripper, a smart soft gripper is constructed. The smart soft gripper can perceive not only the contact and separation between the object and gripper through normal force detection, but also the sliding and landing of the gripped object through shear force detection.

  • Research Article
  • 10.3389/frobt.2025.1728647
Evaluating human perceptions of android robot facial expressions based on variations in instruction styles
  • Dec 16, 2025
  • Frontiers in Robotics and AI
  • Ayaka Fujii + 7 more

Robots that interact with humans are required to express emotions in ways that are appropriate to the context. While most prior research has focused primarily on basic emotions, real-life interactions demand more nuanced expressions. In this study, we extended the expressive capabilities of the android robot Nikola by implementing 63 facial expressions, covering not only complex emotions and physical conditions, but also differences in intensity. At Expo 2025 in Japan, more than 600 participants interacted with Nikola by describing situations in which they wanted the robot to perform facial expressions. The robot inferred emotions using a large language model and performed corresponding facial expressions. Questionnaire responses revealed that participants rated the robot’s behavior as more appropriate and emotionally expressive when their instructions were abstract, compared to when they explicitly included emotions or physical states. This suggests that abstract instructions enhance perceived agency in the robot. We also investigated and discussed how impressions towards the robot varied depending on the expressions it performed and the personality traits of participants. This study contributes to the research field of human–robot interaction by demonstrating how adaptive facial expressions, in association with instruction styles, are linked to shaping human perceptions of social robots.

  • Research Article
  • 10.3390/act14120614
Advanced Servo Control and Adaptive Path Planning for a Vision-Aided Omnidirectional Launch Platform in Sports-Training Applications
  • Dec 15, 2025
  • Actuators
  • Shuai Wang + 5 more

A system-level scheme that couples a multi-dimensional attention-fused vision model and an improved Dijkstra planner is proposed for basketball robots in complex scenes. Fast-moving object detection, cluttered background recognition, and real-time path decision are targeted. For vision, the proposed YOLO11 with Multi-dimensional Attention Fusion (YOLO11-MAF) is equipped with four modules: Coordinate Attention (CoordAttention), Efficient Channel Attention (ECA), Multi-Scale Channel Attention (MSCA), and Large-Separable Kernel Attention (LSKA). Detection accuracy and robustness for high-speed basketballs are raised. For planning, an improved Dijkstra algorithm is proposed. Binary heap optimization and heuristic fusion cut time complexity from O(V2) to O((V+E)logV). Redundant expansions are removed and planning speed is increased. A complete robot platform integrating mechanical, electronic, and software components is constructed. End-to-end experiments show the improved vision model raises mAP@0.5 by 0.7% while keeping real-time frames per second (FPS). The improved path planning algorithm cuts average compute time by 16% and achieves over 95% obstacle avoidance success. The work offers a new approach for real-time perception and autonomous navigation of intelligent sport robots. It lays a basis for future multi-sensor fusion and adaptive path planning research.
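The planner improvement cited above, replacing the O(V²) array scan with a binary heap to reach O((V+E) log V), is the standard priority-queue variant of Dijkstra. A minimal sketch (graph data is illustrative; the paper's heuristic fusion is not shown):

```python
import heapq

def dijkstra(adj, src):
    """Dijkstra's algorithm with a binary heap: O((V+E) log V) versus
    the O(V^2) array-scan variant. `adj` maps node -> list of (nbr, w)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry: a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Illustrative toy graph
adj = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)]}
dist = dijkstra(adj, "a")  # shortest a->c path goes via b
```

The lazy-deletion check (`d > dist[...]`) is what removes the redundant expansions the abstract mentions: outdated heap entries are popped and discarded rather than re-expanded.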

  • Research Article
  • 10.3390/s25247574
Targetless Radar–Camera Calibration via Trajectory Alignment
  • Dec 13, 2025
  • Sensors (Basel, Switzerland)
  • Ozan Durmaz + 1 more

Accurate extrinsic calibration between radar and camera sensors is essential for reliable multi-modal perception in robotics and autonomous navigation. Traditional calibration methods often rely on artificial targets such as checkerboards or corner reflectors, which can be impractical in dynamic or large-scale environments. This study presents a fully targetless calibration framework that estimates the rigid spatial transformation between radar and camera coordinate frames by aligning their observed trajectories of a moving object. The proposed method integrates You Only Look Once version 5 (YOLOv5)-based 3D object localization for the camera stream with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Sample Consensus (RANSAC) filtering for sparse and noisy radar measurements. A passive temporal synchronization technique, based on Root Mean Square Error (RMSE) minimization, corrects timestamp offsets without requiring hardware triggers. Rigid transformation parameters are computed using Kabsch and Umeyama algorithms, ensuring robust alignment even under millimeter-wave (mmWave) radar sparsity and measurement bias. The framework is experimentally validated in an indoor OptiTrack-equipped laboratory using a Skydio 2 drone as the dynamic target. Results demonstrate sub-degree rotational accuracy and decimeter-level translational error (approximately 0.12–0.27 m depending on the metric), with successful generalization to unseen motion trajectories. The findings highlight the method’s applicability for real-world autonomous systems requiring practical, markerless multi-sensor calibration.
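The Kabsch step named in the abstract computes the best-fit rigid transform between two matched point sets in closed form via SVD. A minimal sketch (the pipeline's YOLOv5, DBSCAN, and RANSAC stages are omitted; the example data is synthetic):

```python
import numpy as np

def kabsch(P, Q):
    """Kabsch algorithm: rotation R and translation t minimizing
    ||R p_i + t - q_i|| over matched point sets (rows are points)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Illustrative example: recover a known rotation about z plus a translation
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(0).normal(size=(10, 3))
R, t = kabsch(P, P @ R_true.T + t_true)
```

In the calibration setting described above, P and Q would be the temporally aligned radar and camera trajectories of the moving target, so R and t give the extrinsic transform between the two sensor frames.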

  • Research Article
  • 10.1038/s41598-025-24875-y
Evaluating the morality of violence against robots
  • Nov 28, 2025
  • Scientific Reports
  • J Archer + 2 more

The present study explored human moral perceptions of robots and examined how these perceptions vary based on the anthropomorphic features of the agent, the type of harm inflicted, and how the robot reacts to aversive stimuli. An online survey comprised first-year psychology students (N = 234) and participants recruited via social media (N = 63). Participants watched four videos depicting harmful or aversive scenarios containing a humanoid (NAO) or machine-like robot (Roomba). Scenarios included turning the robot off, physically abusing the robot, verbally abusing the robot and socially ostracizing the robot. The robots’ protest behaviors towards the harmful or aversive scenarios were either physical protest, verbal protest, a combination of verbal and physical protests, or no protest at all. The humanoid robot received significantly more moral concern than the machine-like robot in the social ostracism and turn-off scenarios. However, there was no difference in moral concern observed between the humanoid and machine-like robot in the physical and verbal abuse scenarios. Some differences between scenarios were agent dependent. As predicted, both the machine-like and humanoid robot received significantly more moral concern in the physical abuse scenario than in all other scenarios. Finally, despite the hypothesized influence of protest on attributions of moral concern, no significant impact of protest was present. The present study provides a solid foundation for future research exploring the psychological and moral implications of robot mistreatment.

  • Research Article
  • 10.62802/7sgy8455
Hybrid Quantum–Classical Algorithms for High-Precision Sensor Fusion in Robotics Applications
  • Nov 12, 2025
  • Human Computer Interaction
  • Alp Efe Genç

The fusion of quantum computing and robotics represents a frontier in computational intelligence, promising to overcome the limitations of classical sensor fusion methods in precision, scalability, and real-time adaptability. This study proposes a hybrid quantum–classical framework designed to enhance the integration and synchronization of LiDAR, visual, and inertial sensor data, ultimately improving robotic perception and spatial awareness in complex environments. The model employs quantum-assisted optimization techniques to handle high-dimensional uncertainty, noise propagation, and data redundancy challenges inherent in multi-sensor processing. By leveraging variational quantum circuits and classical machine learning optimizers, the hybrid model achieves efficient data correlation and error minimization during sensor alignment. Benchmark experiments were conducted to evaluate the efficiency and precision of the proposed quantum-assisted sensor fusion system relative to conventional data integration algorithms. The findings reveal that hybrid quantum–classical systems yield substantial improvements in localization accuracy, temporal synchronization, and resilience to sensor noise, while maintaining computational feasibility within near-term quantum devices. This work highlights the potential of quantum-enhanced perception frameworks to accelerate the next generation of autonomous robotics, providing a foundation for adaptive control, intelligent navigation, and mission-critical decision-making under uncertainty.

  • Research Article
  • 10.1002/advs.202509928
Artificial Tactile Perception System for Exploring Internal and External Features of Objects via Time-Frequency Features.
  • Nov 11, 2025
  • Advanced science (Weinheim, Baden-Wurttemberg, Germany)
  • Yuanzhi Zhou + 8 more

Perceiving both the external and internal features of objects is essential for accurate recognition and manipulation by humans and robots alike. However, relying on a single signal processing paradigm may hinder the extraction of specific tactile information from structurally similar signals, posing challenges in both accuracy and computational efficiency in artificial tactile perception systems. Active contact offers a means to modulate the mechanical interaction between tactile systems and objects, enabling targeted perception of specific properties. This work presents a flexible piezoelectric tactile sensing device with active perception capability. It exhibits high force sensitivity (<0.02 N), multi-axis responsiveness, a wide frequency response range (above 2000 Hz), and high spectral resolution (<1 Hz). With two distinct active exploration motions (sliding and vibration), the system extracts edge features through time-domain spike characteristics and infers internal contents via frequency-domain decay trends. The device is integrated on the fingertip of a robotic dexterous hand, demonstrating its effectiveness in tasks such as texture recognition and liquid identification. It also shows promise for handling complex interaction tasks involving multiple subtasks. This study contributes to the advancement of tactile interaction paradigms in robotics and provides a foundation for more intuitive and adaptable robotic perception in human-centered environments.
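The two exploration motions map naturally onto two signal-processing paths: spike detection in the time domain (edges under sliding) and a spectral decay trend in the frequency domain (contents under vibration). A hedged sketch of that split; the threshold rule and decay metric here are illustrative, not the authors' exact method:

```python
import numpy as np

def tactile_features(signal, fs):
    """Illustrative two-path feature extraction for a 1-D tactile signal.
    Returns (spike_count, spectral_decay_slope)."""
    # Time domain: count rising threshold crossings as edge spikes
    thresh = signal.mean() + 3 * signal.std()   # illustrative threshold
    above = signal > thresh
    spikes = int(np.sum(above[1:] & ~above[:-1]))
    # Frequency domain: slope of the log-log magnitude spectrum as a
    # simple proxy for the decay trend (DC bin excluded)
    mag = np.abs(np.fft.rfft(signal))[1:]
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)[1:]
    slope = np.polyfit(np.log(freqs), np.log(mag + 1e-12), 1)[0]
    return spikes, slope

# Illustrative example: three isolated impulses in an otherwise flat trace
sig = np.zeros(1000)
sig[[100, 400, 700]] = 1.0
spikes, slope = tactile_features(sig, fs=1000.0)
```

A real system would classify these features (e.g., texture or liquid identity) downstream; the point of the sketch is only the time-domain versus frequency-domain separation the abstract describes.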

  • Research Article
  • 10.30574/wjaets.2025.17.1.1402
Quantum Computing and Humanoid Robots: Revolutionizing AI Capabilities
  • Oct 31, 2025
  • World Journal of Advanced Engineering Technology and Sciences
  • Vivek Ghulaxe

This study explores how quantum computing can reshape the intelligence, adaptability, and learning capacity of humanoid robotics. It examines how quantum principles such as superposition and entanglement allow robots to process and evaluate information in parallel, leading to faster, more flexible responses than those built on classical computing. The paper connects ideas from quantum machine learning (QML), quantum optimization, and quantum reinforcement learning (QRL) to practical scenarios in humanoid robotics, where rapid reasoning and context awareness are essential. Within a hybrid quantum-classical framework, the study outlines how these methods can enhance robotic perception, decision-making, and natural-language interaction, making cognitive robotics more adaptive in complex domains such as healthcare, manufacturing, and disaster response. Rather than presenting a full solution, this work defines a pathway for integrating quantum algorithms into real robotic architectures. The results indicate that combining quantum computing with humanoid robotics through hybrid quantum-classical systems could lead to a new stage of robotic intelligence machines able to handle uncertainty, learn continuously, and reason in ways that reflect deeper, more human-like awareness.

  • Research Article
  • Cited by 1
  • 10.1145/3758104
Gaze Estimation Learning Architecture as Support to Affective, Social and Cognitive Studies in Natural Human–Robot Interaction
  • Oct 28, 2025
  • ACM Transactions on Human-Robot Interaction
  • Maria Lombardi + 3 more

Gaze is a crucial social cue in any interaction scenario and drives many mechanisms of social cognition (joint and shared attention, predicting human intention and coordinating tasks). Gaze is an indication of social and emotional functions affecting the way emotions are perceived. Evidence shows that embodied humanoid robots endowed with social abilities can be seen as sophisticated stimuli to study several mechanisms of human social cognition while increasing engagement and ecological validity. In this context, building a robotic perception system that automatically estimates human gaze relying only on the robot’s sensors is still demanding. The main goal of this article is to propose a learning robotic architecture estimating the human gaze direction in table-top scenarios without any external hardware. Table-top tasks are largely used in experimental psychology because they are suitable to implement numerous face-to-face collaborative scenarios. Such an architecture can provide valuable support in studies where external hardware might represent an obstacle to spontaneous human behaviour, especially in environments less controlled than the laboratory (e.g., in clinical settings). A novel dataset was also collected with the humanoid robot iCub, including images annotated from 24 participants in different gaze conditions.

  • Research Article
  • 10.3389/frobt.2025.1693988
Real-time open-vocabulary perception for mobile robots on edge devices: a systematic analysis of the accuracy-latency trade-off
  • Oct 21, 2025
  • Frontiers in Robotics and AI
  • Jongyoon Park + 2 more

The integration of Vision-Language Models (VLMs) into autonomous systems is of growing importance for improving Human-Robot Interaction (HRI), enabling robots to operate within complex and unstructured environments and collaborate with non-expert users. For mobile robots to be effectively deployed in dynamic settings such as domestic or industrial areas, the ability to interpret and execute natural language commands is crucial. However, while VLMs offer powerful zero-shot, open-vocabulary recognition capabilities, their high computational cost presents a significant challenge for real-time performance on resource-constrained edge devices. This study provides a systematic analysis of the trade-offs involved in optimizing a real-time robotic perception pipeline on the NVIDIA Jetson AGX Orin 64GB platform. We investigate the relationship between accuracy and latency by evaluating combinations of two open-vocabulary detection models and two prompt-based segmentation models. Each pipeline is optimized using various precision levels (FP32, FP16, and Best) via NVIDIA TensorRT. We present a quantitative comparison of the mean Intersection over Union (mIoU) and latency for each configuration, offering practical insights and benchmarks for researchers and developers deploying these advanced models on embedded systems.

  • Research Article
  • 10.1002/advs.202516810
Ultra‐Sensitive and Linear Flexible Pressure Sensors with Tri‐Scale Graded Microstructures for Advanced Health Monitoring and Robotic Perception
  • Oct 20, 2025
  • Advanced Science
  • Rui Chen + 6 more

Flexible piezoresistive sensors, which combine high sensitivity and a wide linear detection range, are ideal choices for human health monitoring and robotic perception. However, sensors often exhibit a trade‐off between sensitivity and linearity, with challenges caused by the incompressibility of soft materials and the stiffening of microstructures. In this study, a flexible pressure sensor with a 3D ordered tri‐scale graded microstructure, fabricated by laser processing, is proposed. The sensor achieves an ultra‐high sensitivity of 138.6 kPa−1 and a linear range up to 400 kPa (R2 = 0.99). The compensation behavior derived from the tri‐scale graded microstructure's compression deformation counteracts contact hardening and delays sensitivity saturation. Furthermore, the sensor demonstrates a minimum detectable limit as low as 3 Pa, with response and recovery times of 34/39 ms, showing excellent stability after over 24 000 repeated loading cycles. Physiological monitoring confirms that the sensor can accurately capture a wide range of pressure‐variations, including those from the carotid artery, jugular vein, respiration, throat vibrations, and foot pressure. Additionally, the sensor can be used for remote operation of robotic hands. This work provides a strategy for manufacturing flexible pressure sensors with a combination of high sensitivity, high linearity, and a wide pressure response range.
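The two figures of merit quoted above, sensitivity (138.6 kPa⁻¹) and linearity (R² = 0.99 up to 400 kPa), are both obtained from a least-squares fit of calibration data. A minimal sketch with synthetic data (function name and values are illustrative, not the authors' calibration):

```python
import numpy as np

def sensitivity_and_linearity(pressure_kpa, rel_response):
    """Fit S = d(response)/dP over the working range and report R^2:
    the slope is the sensitivity, R^2 quantifies linearity."""
    s, b = np.polyfit(pressure_kpa, rel_response, 1)
    pred = s * pressure_kpa + b
    ss_res = np.sum((rel_response - pred) ** 2)
    ss_tot = np.sum((rel_response - rel_response.mean()) ** 2)
    return s, 1.0 - ss_res / ss_tot

# Illustrative calibration: an ideally linear response at 138.6 per kPa
p = np.linspace(0.0, 400.0, 50)
resp = 138.6 * p
S, r2 = sensitivity_and_linearity(p, resp)
```

With real sensor data the trade-off the abstract describes shows up here directly: contact hardening bends the curve at high pressure, dropping R² unless the microstructure compensates.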

  • Research Article
  • 10.3390/s25206449
Real-Time Parking Space Detection Based on Deep Learning and Panoramic Images
  • Oct 18, 2025
  • Sensors (Basel, Switzerland)
  • Wu Wei + 5 more

In the domain of automatic parking systems, parking space detection and localization represent fundamental challenges that must be addressed. As a core research focus within the field of intelligent automatic parking, they constitute the essential prerequisite for fully autonomous parking. Accurate and effective detection of parking spaces remains the core problem to be solved in automatic parking systems. In this study, building upon existing public parking space datasets, a comprehensive panoramic parking space dataset named PSEX (Parking Slot Extended) with complex environmental diversity was constructed by applying GAN (Generative Adversarial Network)-based image style transfer. Meanwhile, an improved algorithm based on PP-Yoloe (Paddle-Paddle Yoloe) detects the state (free or occupied) and angle (T-shaped or L-shaped) of parking spaces in real time. To handle the many small parking-space labels, the ResSpp module is replaced by the ResSimSppf module, the SimSppf structure is introduced at the neck end, SiLU is replaced by ReLU in the basic CBS (Conv-BN-SiLU) structure, and an auxiliary detection head is added at the prediction head. Experimental results show that the proposed SimSppf_mepre-Yoloe model achieves an average improvement of 4.5% in mAP50 and 2.95% in mAP50:95 over the baseline PP-Yoloe across various parking space detection tasks. In terms of efficiency, the model maintains inference latency comparable to the baseline, reaching up to 33.7 FPS on the Jetson AGX Xavier platform under TensorRT optimization. The improved enhancement algorithm also greatly enriches the diversity of parking space data. These results demonstrate that the proposed model achieves a better balance between detection accuracy and real-time performance, making it suitable for deployment in intelligent vehicle and robotic perception systems.

  • Research Article
  • 10.3390/s25206309
An Improved Two-Step Strategy for Accurate Feature Extraction in Weak-Texture Environments
  • Oct 12, 2025
  • Sensors (Basel, Switzerland)
  • Qingjia Lv + 6 more

To address the challenge of feature extraction and reconstruction in weak-texture environments, and to provide data support for environmental perception by mobile robots operating in such environments, a feature extraction and reconstruction solution for weak-texture environments is proposed. The solution enhances environmental features through laser-assisted marking and employs a two-step feature extraction strategy in conjunction with binocular vision. First, a fast localization method (FLM) for feature points, based on an improved multi-constraint SURF algorithm, is proposed to quickly locate the initial positions of feature points. Then, a robust correction method (RCM) based on light-strip grayscale consistency is proposed to calibrate and obtain the precise positions of the feature points. Finally, a sparse 3D (three-dimensional) point cloud is generated through feature matching and reconstruction. At a working distance of 1 m, the spatial modeling achieves an accuracy of ±0.5 mm, a relative error of 2‰, and an effective extraction rate exceeding 97%. While ensuring both efficiency and accuracy, the solution demonstrates strong robustness against interference. It effectively supports robots in performing tasks such as precise positioning, object grasping, and posture adjustment in dynamic, weak-texture environments.

  • Research Article
  • 10.2196/76209
Comparing Caregiver Perceptions of a Social Robot and Tablet for Serious Game Delivery in Dementia Care: Cross-Sectional Comparison Study
  • Oct 7, 2025
  • JMIR Serious Games
  • Dorothy Bai + 4 more

Background: Social robots integrated with serious games hold promise as innovative nonpharmacological strategies in dementia care. However, few studies have conducted quantitative, platform-level comparisons from the perspective of formal caregivers, who are key stakeholders in technology implementation in dementia care settings.

Objective: This study aimed to evaluate the feasibility, usability, and overall user experience of a serious game–based interaction model delivered via a screen-equipped social robot, compared to a tablet-based version of the same model, from the perspective of formal dementia caregivers.

Methods: A cross-sectional comparative study was conducted with 120 formal dementia caregivers. Each caregiver individually interacted with both a screen-equipped social robot and a touchscreen tablet delivering identical serious game content incorporating cognitive exercises, music therapy, and reminiscence. The robot featured multimodal interaction capabilities, including voice, gestures, movement, and facial expression display, while the tablet relied on standard touchscreen functions. Caregivers evaluated both platforms using the User Experience Questionnaire (UEQ), System Usability Scale (SUS), and a customized Technology Acceptance Model (TAM). Group comparisons were performed using t tests, with post hoc Benjamini-Hochberg correction applied to control for multiple comparisons.

Results: Caregivers generally favored the social robot over the tablet. The robot received higher total UEQ scores (mean 1.29, SD 1.14, vs mean 0.99, SD 1.08; P=.004), particularly in enjoyment (P=.002), friendliness (P=.006), clarity (P=.002), organization (P=.02), interest (P=.01), and innovation (P=.002). In the SUS, caregivers rated the robot higher for quick learning (mean 2.71, SD 0.79 vs mean 2.44, SD 0.81; P=.002), while overall SUS scores were comparable. TAM results indicated higher total scores for the robot (mean 4.03, SD 0.47 vs mean 3.67, SD 0.58; P=.002), with stronger ratings in perceived usefulness (P=.002), ease of use (P=.002), attitudes (P=.002), and behavioral intentions (P=.002). All P values are from 2-tailed t tests and were adjusted using the Benjamini-Hochberg procedure.

Conclusions: The social robot was perceived by formal dementia caregivers as providing a more favorable user experience and eliciting a stronger intention to use than the tablet-based platform. These findings support the feasibility of social robots as a platform for delivering technology-supported activities in dementia care and provide a foundation for future research on their implementation and outcomes.
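The analysis controls the false discovery rate across its many t tests with the Benjamini-Hochberg step-up procedure. A minimal sketch of that correction (the function name and example p values are ours, not the study's data):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR control).
    Each sorted p-value p_(k) is scaled by m/k, then adjusted values
    are made monotone non-decreasing and capped at 1."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

print(benjamini_hochberg([0.001, 0.02, 0.04, 0.3]))
# smallest p-value is scaled by m/1 = 4, the largest stays at 0.3
```

In practice one would use an established implementation such as `statsmodels.stats.multitest.multipletests(method="fdr_bh")`; the sketch above only makes the arithmetic explicit.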

