Articles published on Single camera
3530 search results, sorted by recency
- Research Article
- 10.1016/j.jbiomech.2025.113066
- Jan 1, 2026
- Journal of biomechanics
- Sajeda Al-Hammouri + 7 more
Fall prediction algorithm with built-in instability metrics.
- Research Article
- 10.1177/09226028251408783
- Dec 22, 2025
- Restorative neurology and neuroscience
- Rachel L Hawe + 3 more
Clinical assessments of the post-stroke upper limb have several limitations: they focus primarily on unilateral movements, rely on observer-based ordinal scales, and give limited insight into movement quality. Human pose estimation uses computer vision to extract motion data from videos, making it a clinically feasible tool to assess movement and overcome many challenges of traditional clinical assessments. The objective of this work was to demonstrate the use of video-based pose estimation to enhance the assessment of bilateral tasks in individuals post-stroke through visualizations and quantitative metrics. Using single-camera video recordings of the Chedoke Arm and Hand Activity Inventory in two individuals with chronic stroke and one neurologically intact individual, we demonstrate differences in movement patterns, including increased compensatory movements of proximal joints and asymmetries. We were able to detect differences that the traditional assessment scoring could not, demonstrating the potential of computer vision to enhance clinical assessment.
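A minimal sketch of how one such quantitative metric might be derived, assuming MediaPipe Pose as the backend (the abstract does not name the pose-estimation library) and using trunk lean as a stand-in for the compensatory proximal movements described; the function name and angle definition are illustrative only:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_pose = mp.solutions.pose

def trunk_lean_per_frame(video_path):
    """Per-frame trunk-lean angle (deg from vertical) from a single-camera video.

    Increased trunk lean during a reaching task is one proxy for the
    compensatory proximal movements described in the abstract.
    """
    angles = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if res.pose_landmarks is None:
                continue  # skip frames where no person is detected
            lm = res.pose_landmarks.landmark
            # Midpoints of the shoulders (11, 12) and hips (23, 24),
            # in normalized image coordinates.
            sh = np.array([(lm[11].x + lm[12].x) / 2, (lm[11].y + lm[12].y) / 2])
            hp = np.array([(lm[23].x + lm[24].x) / 2, (lm[23].y + lm[24].y) / 2])
            v = sh - hp  # hip-to-shoulder trunk vector (image y points down)
            angles.append(np.degrees(np.arctan2(v[0], -v[1])))
    cap.release()
    return np.array(angles)
```

Bilateral asymmetry could be quantified the same way, e.g. by comparing left- and right-wrist path lengths computed from the corresponding landmarks.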
- Research Article
- 10.1038/s41598-025-26389-z
- Dec 15, 2025
- Scientific Reports
- Seungmin Lee + 8 more
Multiple system atrophy-cerebellar type (MSA-C) is a rapidly progressive neurodegenerative disorder, yet objective digital biomarkers for disease severity remain scarce. This cross-sectional study aimed to identify disease-relevant gait patterns using a 2D video-based gait analysis algorithm and examine their clinical and neuroimaging correlates. Gait features were extracted from videos of patients with MSA-C using Gaitome, and an MSA-C gait pattern score was derived. This score significantly distinguished MSA-C from healthy controls (area under the curve = 0.98) and showed significant correlations with UMSARS part I (r = 0.49, p = 0.0014), part II (r = 0.51, p = 0.0014), MMSE (r = −0.43, p = 0.012), and MoCA (r = −0.34, p = 0.049). Tractography revealed significant associations between the gait score and structural connectivity in the middle cerebellar peduncle, cerebellum, and cingulate. Voxel-based morphometry showed that the gait score correlated with gray matter volume in the middle temporal and cerebellar regions, whereas UMSARS part II did not show significant structural associations. These findings suggest that gait patterns extracted from a single video camera can reflect both motor and cognitive severity in MSA-C and may serve as a practical, non-invasive digital biomarker for disease monitoring. Supplementary information: The online version contains supplementary material available at 10.1038/s41598-025-26389-z.
- Research Article
- 10.1038/s41467-025-67148-y
- Dec 14, 2025
- Nature communications
- Yanzhe Wang + 3 more
Integrating human-level haptic perception into soft grippers promises safer robotic grasping and improved human-robot interaction. While visual-tactile sensors offer high perception resolution at low cost, they often sacrifice compliance to maintain optical stability, hindering non-planar contact perception. We present FlexiRay, a soft gripper that integrates visual-tactile sensing with the Fin Ray Effect to achieve high compliance, broad sensory coverage, and multimodal capability. Combining a multi-layered flexible substrate, an optimized multi-mirror optical system, and a decoupled deep learning framework, FlexiRay replicates five of the seven human tactile modalities, including force, contact location, texture, temperature, and proprioception, with a single camera. It achieves 0.17 N force accuracy, 0.96 mm spatial resolution, 0.24 mm proprioception accuracy, and 1.17 °C temperature accuracy while maintaining over 90% effective coverage. FlexiRay empowers compliant grasping, safe collaboration, and intelligent teleoperation, underscoring its potential to propel service robotics toward enhanced intelligence, safety, and real-world utility.
- Research Article
- 10.24867/30sa04varga
- Dec 10, 2025
- Zbornik radova Fakulteta tehničkih nauka u Novom Sadu
- Daria Varga
The aim of this thesis was to create a machine-learning-based system capable of estimating a subject's physical and cognitive fitness. The work included an analysis of motion-detection systems and of the tools required for implementation. Using a single camera and Google's MediaPipe, the system evaluates reaction time and jump height, and is optimized for ease of use and low resource requirements.
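As a rough illustration of the jump-height half of such a pipeline, flight time can be read off a MediaPipe ankle-landmark trace and converted with the standard formula h = g·t²/8; the threshold and baseline window below are assumptions, not necessarily the thesis's method:

```python
import numpy as np

def jump_height_from_ankle_y(ankle_y, fps, rise_thresh=0.03):
    """Estimate jump height (m) from a normalized ankle-y time series.

    ankle_y: one MediaPipe ankle y-coordinate per frame (0 = top of image).
    Flight-time method: for matched takeoff/landing heights, h = g * t^2 / 8.
    """
    baseline = np.median(ankle_y[: int(fps)])    # assume ~1 s of quiet standing
    airborne = ankle_y < baseline - rise_thresh  # y decreases as the foot rises
    t = airborne.sum() / fps                     # total flight time in seconds
    return 9.81 * t ** 2 / 8
```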
- Research Article
- 10.9766/kimst.2025.28.6.575
- Dec 5, 2025
- Journal of the Korea Institute of Military Science and Technology
- Jaeho Kim + 3 more
This paper introduces a Blur Components Extraction Model (BCEM) and presents a synthetic image deblurring dataset specialized for maritime environments, the Maritime Blur Dataset (MBD). The proposed BCEM extracts blur kernels from unaligned pairs of sharp and blurred images captured with a single camera, without requiring additional hardware or motion sensors. Using the extracted blur kernels, MBD is constructed by convolving them with high-resolution sharp images of maritime scenes that include ships, buoys, and ocean waves, elements rarely considered in terrestrial benchmark datasets. The proposed MBD is used to train deep learning-based image deblurring models, and their performance is evaluated through both qualitative and quantitative comparisons. By efficiently isolating motion-blur components such as engine-induced vibrations, the proposed approach allows for the construction of high-quality and realistic deblurring datasets.
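The dataset-construction step the abstract describes, convolving sharp images with extracted kernels, might look roughly like the following; the BCEM kernel extraction itself is not reproduced here, and the additive noise term is an assumption:

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_blurred(sharp, kernel, noise_sigma=0.01):
    """Blur a sharp float image (H x W x 3, values in [0, 1]) with a kernel.

    kernel: 2D blur kernel, e.g. one extracted by a model such as BCEM.
    """
    kernel = kernel / kernel.sum()  # blur kernels should integrate to 1
    blurred = np.stack(
        [fftconvolve(sharp[..., c], kernel, mode="same") for c in range(3)],
        axis=-1,
    )
    # Mild sensor noise makes the synthetic pair more realistic (assumption).
    blurred += np.random.normal(0.0, noise_sigma, blurred.shape)
    return np.clip(blurred, 0.0, 1.0)
```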
- Research Article
- 10.1063/5.0282819
- Dec 1, 2025
- The Review of scientific instruments
- J Moscatelli + 2 more
We introduce a device developed to perform 3D tracking of passive or active particles under flow, confined in a medium hundreds of micrometers wide. Micro-objects are placed inside a vertical glass capillary, and two mirrors are positioned behind it at an angle chosen so that the two reflections of the capillary lie on the same optical plane. A 3D reconstruction of the trajectories, captured with a single camera, is carried out along the vertical axis with micrometer-scale precision. To investigate the interplay between shear, gravity, and motility, we track a model puller-type microalga, Chlamydomonas reinhardtii, under a Poiseuille flow, using first its natural fluorescence and then bright-field imaging. Understanding how confinement influences motility is crucial, and we show that this 3D tracking setup enables a full description of the interactions between a motile organism and a solid border.
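Under the simplifying assumption that each mirror reflection behaves like an orthogonal side view sharing the camera's vertical axis, fusing the direct and reflected detections into 3D trajectories could be sketched as below; the matching tolerance and pixel-to-micrometer factor are placeholders, not the paper's calibration:

```python
import numpy as np

def merge_views(direct_xy, mirror_xy, px_to_um, y_tol_px=5.0):
    """Fuse direct-view (x, y) and mirror-view (x', y) detections into 3D points.

    Assumes the mirror view acts as an orthogonal side view sharing the
    vertical axis, so its horizontal coordinate encodes depth z. Detections
    are matched frame by frame through their vertical position.
    """
    pts = []
    for (x, y), (z, y2) in zip(direct_xy, mirror_xy):
        if abs(y - y2) < y_tol_px:  # same particle if heights agree
            pts.append((x * px_to_um, y * px_to_um, z * px_to_um))
    return np.array(pts)
```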
- Research Article
- 10.1016/j.jbiomech.2025.112986
- Dec 1, 2025
- Journal of biomechanics
- Brian Horsak + 6 more
Validity and reliability of monocular 3D markerless gait analysis in simulated pathological gait: A comparative study with OpenCap.
- Research Article
- 10.1109/jbhi.2025.3617825
- Dec 1, 2025
- IEEE journal of biomedical and health informatics
- Lin Liu + 6 more
Cardiovascular disease is one of the leading causes of death worldwide. Accurately capturing and analyzing the multidimensional dynamics of cardiac motion is crucial for early diagnosis and rehabilitation assessment. This study introduces a novel concept for non-contact decoupling and reconstruction of the cardiac linear vibration (SCG) and rotational components (GCGx and GCGy) by integrating speckle motion signals captured from two cameras with different defocus levels. The intention is to overcome the motion-coupling issues inherent in single-camera imaging and to improve the accuracy of characterizing the heart's complex 3D mechanical behavior. Using a sternum-mounted inertial sensor as the reference, experiments were conducted on 42 subjects in laboratory and intensive care unit settings. The results show that the reconstructed cardiac 3D motion signals exhibit greater waveform similarity to the reference signal than the raw speckle motion signal from a single camera, with similarity indices above 87.471%. In addition, with an 8 ms tolerance error, the localization accuracies of six key biomarkers (aortic valve opening/closing (AO/AC), mitral valve opening/closing (MO/MC), and the biomarkers corresponding to the AO event in GCGy and the MC event in GCGx) are 73.080%, 99.998%, 85.587%, 86.617%, 99.683%, and 77.301%, respectively. These results also outperform those obtained from the raw speckle motion signal. These findings validate the rationale and effectiveness of using dual-camera imaging with different defocus levels to reconstruct SCG, GCGx, and GCGy, offering a promising approach for accurately capturing complex cardiac 3D motion and improving cardiac function assessment.
- Research Article
- 10.1063/5.0287633
- Dec 1, 2025
- Physics of Fluids
- Gene Patrick S Rible + 9 more
In this experimental work, we compare drop impact behavior on horizontal fiber arrays with circular and wedged fiber cross sections. Non-circular fibers are commonplace in nature, appearing on rain-interfacing structures from animal fur to pine needles. Our arrays, with packing densities of approximately 50, 100, and 150 cm⁻², are impacted by drops falling at 0.2–1.6 m/s. Previous work has shown that hydrophilic horizontal fiber arrays reduce dynamic drop penetration more than their hydrophobic counterparts. In this work, we show that circularity, like hydrophobicity, increases drop penetration. Despite being more hydrophilic, our circular fibers promote drop penetration 26% more than their non-circular counterparts, through suppression of lateral spreading and promotion of drop fragmentation within the array. Circular fiber cross sections induce a more circular liquid shape within the fiber array after infiltration. Using conservation of energy, we develop a model that predicts the penetration depth within the fiber array using only measurements from a single external camera above the array. We generalize our model to accommodate fibers of any convex cross-sectional geometry.
- Research Article
- 10.1186/s40462-025-00608-8
- Nov 27, 2025
- Movement Ecology
- Weihao Qi + 11 more
Background: Quantification of locomotion is central to the study of animal movement ecology. Although technological advances have enabled researchers to acquire high-resolution kinematic data, the associated methods often require multiple cameras and complicate the analysis process. Quantifying complex animal locomotion in three-dimensional space lacks an accurate, user-friendly method. Methods: By combining deep learning tools and the pinhole camera model, we develop a novel method for reconstructing three-dimensional animal motion trajectories from monocular videos and analyzing kinematic data. We tested spatial precision and occlusion robustness in both aerial and ground-based scenarios. The method was then applied to a bat-predation biomechanics study to demonstrate its capabilities. The application is based on a low-cost single camera and does not require multiple devices or precise calibration. Results: Our method rapidly reconstructs 3D trajectories for various animal movements, including flight, walking, and preying. The estimated 3D coordinates have an average bias of 0.09 m for aerial motion and 0.044 m for ground motion. Moreover, our method is extremely robust in distance estimation when faced with foreground occlusion. We extracted kinematic parameters from the 3D trajectories and gait frequencies from pixel-area changes. Applying these parameters to biomechanical analysis, the results show that the obtained parameters can accurately describe the animal's movement. Conclusions: This lightweight and cost-effective approach allows the analysis of animal locomotion in the natural environment. It also allows researchers to flexibly adapt it to their specific needs, facilitating intelligent monitoring of wild animals and enhancing the understanding of their locomotion data. Supplementary information: The online version contains supplementary material available at 10.1186/s40462-025-00608-8.
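The pinhole-camera step of such a monocular pipeline reduces to two textbook relations; the sketch below, with an assumed known body length supplying metric scale (the paper's deep-learning components are not reproduced), shows depth-from-apparent-size and pixel back-projection:

```python
import numpy as np

def pinhole_depth(pixel_length, true_length_m, focal_px):
    """Depth from apparent size under the pinhole model: Z = f * L / l.

    Assumes the animal's true body length L (m) is known, e.g. from species
    data, and l is its apparent length in pixels.
    """
    return focal_px * true_length_m / pixel_length

def back_project(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at known depth Z into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

Chaining these per frame over a detected animal yields the 3D trajectory; the deep-learning detector's job is supplying (u, v) and the apparent size.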
- Research Article
- 10.1117/1.apn.5.1.016001
- Nov 26, 2025
- Advanced Photonics Nexus
- Raviv Ilani + 1 more
4D event imaging with a single neuromorphic camera
- Research Article
- 10.1016/j.ohx.2025.e00723
- Nov 24, 2025
- HardwareX
- Abinash Sahoo + 2 more
Low-cost open-source camera view splitter (quadscope) for flow diagnostics
- Research Article
- 10.1149/ma2025-02542581mtgabs
- Nov 24, 2025
- Electrochemical Society Meeting Abstracts
- Tihana Stefanic + 2 more
Because recycling processes have yet to be developed to a point where they are economically and environmentally viable, a feasible response to the growing deployment of Li-ion cells is to extend their useful lifetime and performance. This research uses experimentally parametrized physics-based modelling to understand how coupling among the charge, strain, and thermal history of a battery impacts its performance.

Our group has had previous success using an extended Newman–Tobias model to predict the thermal and electrochemical behavior of large pouch cells [1][2]. Predictions of the cell-level thermal response to a high C-rate lock-in thermography experiment, parametrized using inverse modelling of the same cell, are shown in Figure 1 (right).

Cell-level mechanical effects on performance are less well understood, partly because experimental data are more limited. Quantifying thermomechanical and electromechanical effects is particularly important for ensuring safety limits for fast charging and for cell design. Physical models that account for swelling could assist in designing stacks that minimize swelling, in turn reducing fatigue and extending battery life. The main challenge in incorporating mechanical effects into modelling, however, is that detailed, well-controlled experiments are needed both to parametrize simulations and to validate models. To that end, we will report preliminary experimental studies of the coupled cell-level mechanical and thermal response of large-format pouch cells, as shown in Figure 1 (left).

Mechanical stress and strain are believed to serve as good indicators of a battery's state of health [3]. We will report in-situ displacement measurements at varying C-rates and under different modes of thermal control. The main focus will be to parse swelling into contributions from thermal expansion and state of charge. Previous cell-level experiments have suggested that swelling depends on local temperature almost as strongly as it does on the charge state.

Figure 1. Left: single infrared-camera shot of a 20 Ah A123 LFP cell with the active battery area and an ambient spot highlighted in black and red, respectively, using the experimental setup proposed by Chu et al. [1]. Right: simulated thermogram of a 20 Ah A123 LFP cell, parametrized using the method proposed by Chu et al. [1].

[1] Chu, H. N., Kim, S. U., Rahimian, S. K., Siegel, J. B., & Monroe, C. W. (2020). Parameterization of prismatic lithium–iron–phosphate cells through a streamlined thermal/electrochemical model. Journal of Power Sources.
[2] Lin, J., Chu, H. N., Howey, D. A., & Monroe, C. W. (2022). Multiscale coupling of surface temperature with solid diffusion in large lithium-ion pouch cells. Communications Engineering, 1(1), 1.
[3] Oh, K. Y., Siegel, J. B., Monroe, C. W., & Stefanopoulou, A. (2014). Rate dependence of swelling in lithium-ion cells. Journal of Power Sources, 267, 197–202.
- Research Article
- 10.3390/s25226858
- Nov 10, 2025
- Sensors (Basel, Switzerland)
- Jairo José Muñoz Chávez + 5 more
This research presents a novel, low-cost optical acquisition system based on infrared imaging for real-time weld bead geometry monitoring in Gas Metal Arc Welding (GMAW). The system uniquely employs a commercial CCD camera (1000–1150 nm) with tailored filters and lenses to isolate molten pool thermal radiation while mitigating arc interference. A single camera and a mirror-based setup simultaneously capture weld bead width and reinforcement. Acquired images are processed in real time (10 ms intervals) using MATLAB R2016b algorithms for edge segmentation and geometric parameter extraction. Dimensional accuracy under different welding parameters was ensured through camera calibration modeling. Validation across 35 experimental trials (over 6000 datapoints) using laser profilometry and manual measurements showed errors below 1%. The resulting dataset successfully trained a Support Vector Machine, highlighting the system’s potential for smart manufacturing and predictive modeling. This study demonstrates the viability of high-precision, low-cost weld monitoring for enhanced real-time control and automation in welding applications.
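The edge-segmentation step could be prototyped as below; the abstract's pipeline is MATLAB-based, so this Python/OpenCV analogue, with arbitrary threshold values and the pixel-to-millimeter conversion left to camera calibration, is only a sketch:

```python
import cv2
import numpy as np

def bead_width_px(ir_frame, thresh=128):
    """Rough weld-bead width (px) from one single-channel infrared frame.

    Segments the hot bead by intensity, finds its edges, and measures the
    horizontal span; calibration would convert this to millimeters.
    """
    blur = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    _, mask = cv2.threshold(blur, thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(mask, 50, 150)
    cols = np.where(edges.any(axis=0))[0]  # columns containing bead edges
    if cols.size < 2:
        return 0.0
    return float(cols[-1] - cols[0])       # width = rightmost - leftmost edge
```

In the paper's setup, the mirror places a second, side-on view of the bead in the same frame, so reinforcement (height) could be measured the same way on the mirrored region.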
- Research Article
- 10.1177/17298806251404171
- Nov 1, 2025
- International Journal of Advanced Robotic Systems
- Shangwei Yang + 2 more
The rapidly increasing number of electric ships in use worldwide necessitates the development of fast, secure, and autonomous shore-based charging systems that can meet the unique conditions of shipping, including the limited time ships spend at dock owing to set travel schedules and the highly dynamic operating conditions of marine environments. However, existing marine charging systems remain insufficiently reliable and efficient. The present work addresses this issue by proposing an innovative vision-controlled automatic robotic charging system composed of a robotic charging station and an auxiliary alignment platform. First, visual data captured by a single camera on the charging station is combined with the You Only Look Once (YOLO11) object detection model to coarsely identify circular targets on the alignment platform. Then, precise target coordinates are obtained from detailed edge features extracted from the rough target image using Canny–Zernike and least-squares algorithms, and the target is located within the camera coordinate system based on the pose calculated by the Infinitesimal Plane-based Pose Estimation algorithm. Finally, the precise target coordinates are transmitted to the robotic arm of the charging station, the plug and socket are connected, and charging commences. The effectiveness and accuracy of the proposed charging system are demonstrated through full-scale real-world experiments with a prototype, in which the system achieves a displacement error within 0.8 mm of the precise target position and an angular positioning error within 0.7°, sufficient to meet the accuracy requirements of charging-system connections in practical engineering applications.
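OpenCV exposes the IPPE solver the abstract names; a minimal pose-recovery sketch, with a hypothetical 5 cm square of circle centers standing in for the platform's actual target layout, might read:

```python
import cv2
import numpy as np

# Hypothetical planar target: four circle centers (m) on the alignment
# platform, expressed in the target's own coordinate frame (z = 0 plane).
OBJECT_PTS = np.array(
    [[0.00, 0.00, 0.0], [0.05, 0.00, 0.0], [0.05, 0.05, 0.0], [0.00, 0.05, 0.0]],
    dtype=np.float64,
)

def target_pose(image_pts, K, dist):
    """Planar-target pose via Infinitesimal Plane-based Pose Estimation.

    image_pts: (4, 2) refined circle-center pixel coordinates.
    K, dist:   camera intrinsics and distortion from calibration.
    """
    ok, rvec, tvec = cv2.solvePnP(OBJECT_PTS, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("IPPE pose estimation failed")
    return rvec, tvec  # rotation (Rodrigues vector) and translation, camera frame
```

The paper's Canny–Zernike subpixel refinement would feed `image_pts`; here it is assumed to have already happened.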
- Research Article
- 10.1016/j.prosdent.2025.07.013
- Nov 1, 2025
- The Journal of prosthetic dentistry
- Nurşen Şahin + 2 more
Evaluation of color matching accuracy using artificial intelligence applications and a spectrophotometer: A photometric analysis.
- Research Article
- 10.1126/science.adz1705
- Oct 30, 2025
- Science (New York, N.Y.)
- Soumaya Latour + 6 more
We present a direct measurement of the slip-rate function from a natural coseismic rupture, recorded on 28 March 2025, during the moment magnitude (Mw) 7.7 Mandalay earthquake (Myanmar). This measurement was made using video footage of the surface rupture captured by a closed-circuit television (CCTV) security camera located only meters away from the fault trace. Using direct image analysis, we measured the relative slip at each time step and deduced the slip rate. Our results show a local slip duration of 1.4 seconds and a cumulative slip of ~3 meters, during which surface slip velocity peaked at ~3.5 meters per second with passage of the rupture front. These findings demonstrate the pulse-like nature of the seismic rupture at the location of the recording. Using slip-pulse elastodynamic rupture models, we obtained the complete mechanical properties of this pulse, including the energy release rate.
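The abstract does not specify the image-analysis technique; one standard way to measure relative slip per frame is phase correlation between patches on opposite sides of the fault trace, sketched here with illustrative regions of interest and scale factor:

```python
import cv2
import numpy as np

def slip_rate(frames, fps, px_to_m, roi_a, roi_b):
    """Relative fault slip from fixed-camera footage via phase correlation.

    frames:        list of BGR frames from a stationary camera.
    roi_a, roi_b:  (x, y, w, h) patches on opposite sides of the fault trace.
    Returns cumulative slip (m) and slip rate (m/s) time series.
    """
    def patch(f, roi):
        x, y, w, h = roi
        return np.float32(cv2.cvtColor(f[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY))

    ref_a, ref_b = patch(frames[0], roi_a), patch(frames[0], roi_b)
    slip = []
    for f in frames:
        (dxa, _), _ = cv2.phaseCorrelate(ref_a, patch(f, roi_a))
        (dxb, _), _ = cv2.phaseCorrelate(ref_b, patch(f, roi_b))
        slip.append((dxa - dxb) * px_to_m)  # relative along-strike offset
    slip = np.array(slip)
    rate = np.gradient(slip) * fps          # d(slip)/dt
    return slip, rate
```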
- Research Article
- 10.36001/phmconf.2025.v17i1.4371
- Oct 26, 2025
- Annual Conference of the PHM Society
- Tarek Yahia + 5 more
The demand for work-safety protection in Human-Robot Interaction (HRI) work cells is rapidly increasing, driven by the projected 34.3% Compound Annual Growth Rate (CAGR) of the global Collaborative Robot (Cobot) market from 2020 to 2030 [1]. According to IFR World Robotics 2023, nearly 4 million industrial robots are in operation worldwide, approximately 10% of them cobots [2]. A NIOSH report highlighted 61 robot-related fatalities between 1992 and 2015, with a further rise expected due to the increasing use of industrial robots and cobots in the US work environment [3]. A recent study [4] delved into 355 robot accidents documented by KOSHA between 2009 and 2019, revealing that 95% occurred in manufacturing businesses. Pinch and crush incidents accounted for 52% of the accidents, impacts and collisions for 36%, and the remaining 12% involved falls, flying objects, trips/slips, cuts, burns, etc. These findings align with US data reported in [5]. The rising integration of cobot units among major manufacturers emphasizes the critical need to enhance cobot safety in manufacturing. Owing to safety considerations and regulatory requirements, existing cobots frequently operate at significantly reduced speeds and are restricted from undertaking complex interaction tasks in shared workspaces. This limitation has curtailed the full potential and productivity of cobots in manufacturing. This paper introduces a novel 3D sensing framework designed to address these limitations by enabling safety assurance in workspaces requiring close human-robot interaction. The framework generates 3D human pose information and relays it to the robot for real-time safety monitoring. Our methodology begins with data collection from a single RGB-D camera capturing human-robot interactions in a manufacturing environment. Human shape and pose are predicted using deep neural networks; depth information is then incorporated, and 3D geometric transformations are applied to deduce size, shape, and translation. This process produces a reconstructed 3D avatar with pose, size, and location. Following 3D human posture estimation, these data are integrated into a virtual environment with a real robot for real-time monitoring. Results demonstrate successful reconstruction of 3D human geometry within human-robot collaboration settings. By integrating both the reconstructed mesh and the real-time robot state into a unified virtual environment, we achieved real-time and offline continuous monitoring of the critical distance between robot and human throughout operation. These distance measurements provide crucial data for developing collision detection, prediction, and avoidance capabilities when incorporated into the robot control feedback loop.
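The distance-monitoring step reduces to a nearest-pair query between the posed human mesh and sampled robot-link points; a brute-force sketch (the safety thresholds are illustrative, not from the paper) is:

```python
import numpy as np

SLOW_DOWN_M, STOP_M = 1.0, 0.3  # illustrative safety thresholds

def min_separation(human_vertices, robot_points):
    """Minimum human-robot distance for one monitored frame.

    human_vertices: (N, 3) posed-mesh vertices in the shared world frame.
    robot_points:   (M, 3) points sampled along the robot's links.
    Brute force is fine for a few thousand vertices and a few hundred
    robot points per frame; a KD-tree would scale further.
    """
    d = np.linalg.norm(
        human_vertices[:, None, :] - robot_points[None, :, :], axis=-1
    )
    return float(d.min())
```

The result would feed the robot control loop, e.g. slowing below `SLOW_DOWN_M` and stopping below `STOP_M`.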
- Research Article
- 10.3390/math13203330
- Oct 19, 2025
- Mathematics
- Erick P Herrera-Granda + 6 more
In recent years, SLAM, visual odometry, and structure-from-motion approaches have widely addressed the problems of 3D reconstruction and ego-motion estimation. Of the many input modalities that can be used to solve these ill-posed problems, the pure visual alternative using a single monocular RGB camera has attracted the attention of multiple researchers due to its low cost and widespread availability in handheld devices. One of the best proposals currently available is the Direct Sparse Odometry (DSO) system, which has demonstrated the ability to accurately recover trajectories and depth maps using monocular sequences as the only source of information. Given the impressive advances in single-image depth estimation using neural networks, this work proposes an extension of the DSO system, named DeepDSO. DeepDSO effectively integrates the state-of-the-art NeW CRF neural network as a depth estimation module, providing depth prior information for each candidate point. This reduces the point search interval over the epipolar line. This integration improves the DSO algorithm’s depth point initialization and allows each proposed point to converge faster to its true depth. Experimentation carried out in the TUM-Mono dataset demonstrated that adding the neural network depth estimation module to the DSO pipeline significantly reduced rotation, translation, scale, start-segment alignment, end-segment alignment, and RMSE errors.