Correction of Dynamic Guidance Deviations in Laser Projections for Museum and Multimedia Installations

Abstract

Ensuring the accuracy of laser beam guidance is a key condition for the quality of modern museum and multimedia installations. Even minor dynamic deviations caused by vibrations or thermal deformations lead to a loss of projection clarity and a decrease in the immersion effect. The purpose of the article is to develop and test an embedded software system that compensates for dynamic errors in laser projection guidance in real time for museum and multimedia installations. The research methodology comprises mathematical modelling of dynamic deviations, computer vision algorithms for detecting and tracking laser marks, sensor fusion of data from inertial measurement units (IMUs), and a modular software implementation on single-board computers running Linux. The work uses system analysis to evaluate existing approaches, experimental testing to verify the performance of the algorithms, and comparative tests against classical stabilisation methods. The novelty of the research lies in the creation of an affordable and resource-efficient system that combines CMOS sensors, light-spot detection algorithms and quaternion integration of IMU data. This architecture enables the processing of streaming video at approximately 90 frames/s with low hardware requirements, allowing high-quality compensation of errors without expensive opto-mechanical equipment. In conclusion, the article identifies the main problems of dynamic stabilisation of laser projections, analyses modern hardware and software solutions, and develops and tests the author's built-in correction system. The results confirm that the proposed approach enhances the accuracy and stability of projections for museum and multimedia applications, combining cost-effectiveness, scalability and technical reliability.
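The "quaternion integration of IMU data" named in the abstract can be sketched roughly as follows. This is a minimal illustration under assumed conventions (Hamilton quaternions in [w, x, y, z] order, a 90 Hz update rate matching the cited frame rate), not the authors' implementation:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation quaternion q by body angular rate omega (rad/s)
    over dt using the first-order update q <- q + 0.5 * q (x) [0, omega] * dt,
    then renormalise to stay on the unit sphere."""
    q = q + 0.5 * quat_mul(q, np.array([0.0, *omega])) * dt
    return q / np.linalg.norm(q)

# Rotate at 90 deg/s about z for one second at 90 Hz (the frame rate cited above).
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(90):
    q = integrate_gyro(q, [0.0, 0.0, np.deg2rad(90.0)], 1.0 / 90.0)
yaw = np.degrees(2.0 * np.arctan2(q[3], q[0]))  # recover the z-rotation angle
```

After one simulated second the recovered yaw should be very close to 90°; the residual error of the first-order update at this step size is on the order of thousandths of a degree.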

Similar Papers
  • Research Article
  • Cited by 3
  • 10.1097/scs.0000000000002970
Correction of Residual Static and Dynamic Labial Deviations in a Paralyzed Face After Free Gracilis Muscle Transplantation.
  • Nov 1, 2016
  • The Journal of craniofacial surgery
  • Ricardo Horta + 4 more

Free muscle flap transfer is currently the procedure of choice for longstanding facial paralysis to restore symmetry both at rest and when smiling. However, the movements obtained are generally localized and unidirectional, and philtrum centralization and lower lip movement are not proportionally achieved. The stability of free flap insertion at the lips also interferes with the results, as gradual disinsertion and shifting of the nasolabial fold can be caused by repetitive movements. Asymmetry of smile can also be caused by lip depressor inactivity due to marginal mandibular paralysis, and both dynamic and static procedures are often required after dynamic reanimation. Here, the authors report a technical refinement that can be used even years after facial reanimation, using concealed scars and with minimal morbidity, for correction of static and dynamic labial deviations from the midline. Placement of a transfixed C-fashion tendon graft between the gracilis free flap and the orbicularis oris of the upper and lower lip on the nonparalyzed side allows the forces from muscle contraction to be transferred to the philtrum and lower lip. It allows correction of static and dynamic labial deviations from the midline, reducing rates of inadequate fixation and partial or total disinsertion of the muscle flap in the buccal region.

  • Conference Article
  • Cited by 4
  • 10.1145/3407982.3407990
Display of Computer-Generated Vector Data by a Laser Projector
  • Jun 19, 2020
  • Svetozar Ilchev + 2 more

The paper proposes a new approach for the display of computer-generated vector data by a laser projector that will benefit applications in the fields of advertising, marketing and manufacturing. Among the advantages of laser projections are the capability for a swift change of the projected content and the high contrast and brightness of the presentation. They offer a flexible way to attract human attention with a high degree of success. The paper presents the design and development of the software and hardware building blocks of our prototype laser-based projection system. First, the overall conceptual design of the system is presented and the strategy for the adaptation of the vector data for laser projection is explained. Then, some of the implementation details of both the software and hardware of the system are described. The challenges that we encountered are discussed, some experimental results are presented and the next steps for the successful practical application of the system are outlined.

  • Research Article
  • 10.1088/1757-899x/1031/1/012040
Software for laser projection of CAD files for the clothing industry
  • Jan 1, 2021
  • IOP Conference Series: Materials Science and Engineering
  • S Ilchev + 3 more

The software for laser projection of CAD files is part of a larger solution for laser projection of CAD data at the workbench of human operators. The software was originally intended for the clothing industry, but it is also suitable for other industries that work with vector-based data. Among our goals during the software development were the support of CAD data from various design systems used in the manufacturing industry and the design of tools for data modification to make the data suitable for laser projection. The projection is done by a laser projector that contains one or more semiconductor laser diodes and at least two rapidly rotating mirrors called scanners, which deflect the laser beam and project it on a two-dimensional surface. This principle of operation requires the modification of the CAD data, which is usually intended for use on a CNC router, plotter or another similar machine. Our software was developed to enable this modification through suitable software operations and to export the CAD data to a format supported by laser projectors. The experimental results show that we achieved our goals and that our software enables the generation of high-quality laser projections.

  • Conference Article
  • Cited by 20
  • 10.1109/plans.2014.6851404
An exploration of low-cost sensor and vehicle model solutions for ground vehicle navigation
  • May 1, 2014
  • Daniel C Salmon + 1 more

This paper discusses an exploratory analysis of the benefits of using vehicle odometry/steer angle and an accurate vehicle model (VM) to replace or assist a low-cost Inertial Measurement Unit (IMU) for blended ground vehicle navigation. In this research, multiple variations of the tightly coupled Extended Kalman Filter (EKF) algorithm are performed using multiple sensor sets to find the optimal solution, factoring in sensor cost and pose accuracy. Many automotive precision navigation solutions have been developed based on sensor fusion in recent years; however, as autonomous navigation technology becomes more prevalent on consumer vehicles, the need for a high-accuracy, low-cost pose solution is increasing. One widely used solution to this problem is the combination of a Micro-Electro-Mechanical (MEMS) IMU with the Global Positioning System (GPS); however, this may not be the optimal solution due to the high noise characteristics of lower-cost IMUs. Measurements from GPS, IMU/Inertial Navigation System (INS) and VM are used in this research. The algorithm setups investigated include: GPS/VM sensor fusion with accurate vehicle model constraints, GPS/INS with a low-cost commercially available IMU, and GPS/INS/VM with the IMU. Determining the level of IMU necessary for GPS/INS fusion to exceed the pose solution accuracy achievable with GPS/VM sensor fusion under accurate vehicle model constraints is a priority for this research. Another goal is the quantitative and qualitative analysis of the benefits of using the VM to assist a normal GPS/INS EKF, and whether including the VM in the time update or the measurement update results in a more accurate pose solution. 
Direct experimental comparison of tightly coupled EKF Fault Detection and Exclusion (FDE) algorithms based on vehicle wheel speed and steering angle versus the IMU measurements, to determine if either sensor set yields a distinct advantage over the other, is also investigated. All analysis will be based on real-world experimental data.

  • Conference Article
  • Cited by 35
  • 10.1049/cp.2014.0527
Fusing Kinect Sensor and Inertial Sensors with Multi-rate Kalman Filter
  • Jan 1, 2014
  • Shimin Feng + 1 more

This paper presents a sensor fusion approach to fusing the Microsoft Kinect sensor and the built-in inertial sensors in a mobile device. A multi-rate Kalman filter is designed and applied to fuse the low-sampling-rate (30 Hz) uncertain positions sensed by the Kinect and the high-sampling-rate (90 Hz) accelerations measured by the inertial sensors. These sensors have complementary properties. The Kinect can be applied for skeleton tracking, which gives the joints' positions, while the built-in inertial sensors in the mobile device sense the hand motion, from which the acceleration can be estimated through inertial sensor fusion. First, the acceleration estimated with the inertial sensors is converted from the body frame into the Kinect coordinate system. Experimental results show that the hand accelerations estimated with the Kinect sensor and the inertial sensors are comparable. Second, a multi-rate Kalman filter is designed and applied for sensor fusion. The sensor fusion improves the accuracy of the system state estimation, including the position, the velocity and the acceleration. This is of great benefit for combining inertial sensors with an external position-sensing device for indoor augmented reality (AR) and other location-aware sensing applications.
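A minimal one-dimensional sketch of the multi-rate predict/correct pattern described above — predicting at 90 Hz with accelerations and correcting at 30 Hz with positions — might look like the following. All noise levels, the motion profile and the tuning values are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0 / 90.0                         # inertial sampling interval (90 Hz)
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
B = np.array([0.5 * dt**2, dt])         # measured acceleration as control input
H = np.array([[1.0, 0.0]])              # position sensor observes position only
Q = np.diag([1e-6, 1e-4])               # process noise (assumed)
R = np.array([[4e-4]])                  # position noise, ~2 cm std (assumed)

x = np.zeros(2)                         # state: [position, velocity]
P = np.eye(2)

true_pos, true_vel = 0.0, 0.0
for k in range(900):                    # 10 s of simulated hand motion
    accel_true = np.sin(2 * np.pi * 0.5 * k * dt)   # smooth oscillation
    true_vel += accel_true * dt
    true_pos += true_vel * dt

    # Predict at 90 Hz with the (noisy) inertial acceleration.
    accel_meas = accel_true + rng.normal(0.0, 0.05)
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q

    # Correct at 30 Hz with the (noisy) position fix: every 3rd step.
    if k % 3 == 0:
        z = true_pos + rng.normal(0.0, 0.02)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P

pos_error = abs(x[0] - true_pos)
```

The key design point is simply that the predict step runs at the fast sensor's rate while the measurement update fires only when a slow-rate sample arrives.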

  • Conference Article
  • Cited by 61
  • 10.1109/plans.2008.4569999
Modeling and bounding low cost inertial sensor errors
  • Jan 1, 2008
  • Zhiqiang Xing + 1 more

This paper presents a methodology for developing models for the post-calibration residual errors of inexpensive inertial sensors in the class normally referred to as “automotive” or “consumer” grade. These sensors are increasingly being used in real-time vehicle navigation and guidance systems. However, manufacturer supplied specification sheets for these sensors seldom provide enough detail to allow constructing the type of error models required for analyzing the performance or assessing the risk associated with navigation and guidance systems. A methodology for generating error models that are accurate and usable in navigation and guidance systems’ sensor fusion and risk analysis algorithms is developed and validated. Use of the error models is demonstrated by a simulation in which the performance of an automotive navigation and guidance system is analyzed.
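Post-calibration residual errors of this sensor class are often modeled as a constant residual bias, a slowly wandering bias (rate random walk) and white rate noise. The sketch below simulates such a model with purely illustrative magnitudes, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100.0                  # sample rate, Hz (assumed)
n = 10_000                  # 100 s of data
dt = 1.0 / fs

# Illustrative consumer-grade gyro error terms (magnitudes are assumptions):
bias0 = np.deg2rad(0.5)     # residual turn-on bias, rad/s
arw = np.deg2rad(0.2)       # angle random walk coefficient, rad/s/sqrt(Hz)
rrw = np.deg2rad(0.01)      # rate random walk coefficient, rad/s^2/sqrt(Hz)

white = rng.normal(0.0, arw * np.sqrt(fs), n)             # white rate noise
drift = np.cumsum(rng.normal(0.0, rrw * np.sqrt(dt), n))  # wandering bias
true_rate = np.zeros(n)                                   # sensor at rest
measured = true_rate + bias0 + drift + white

# Integrating the measured rate shows the dominant effect of the
# uncompensated constant bias (about 0.5 deg/s * 100 s = 50 deg here):
angle_error = np.cumsum(measured) * dt                    # rad
```

An error model of this shape is what a navigation filter's process-noise and bias states are built around, and it is the kind of model the paper argues specification sheets alone cannot supply.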

  • Conference Article
  • Cited by 1
  • 10.1109/uralcon52005.2021.9559593
Study of the Effectiveness of the Introduction of Laser Projection System in the Process of Technological Preparation of the Production of Aircraft Structures From Polymer Composite Materials
  • Sep 24, 2021
  • Oleg S Dolgov + 2 more

The results of an applied study of the effectiveness of laser technologies in the automation of technological preparation of serial production of aircraft structures from polymer composite materials are presented. The article also discusses reducing the loss of expensive composite materials through the use of precise sweeping and cutting machines with CNC and reducing time losses by increasing the speed and improving the quality of manual stacking of composite material by means of using precise blanks and laser projections of the stacking areas. Particular importance in the article is paid to the use of laser technologies in the processes of automation of technological preparation of batch production to achieve a high level of repeatability of products, which will significantly reduce production defects due to the influence of the human factor on the production process.

  • Supplementary Content
  • Cited by 10
  • 10.5075/epfl-thesis-3192
3D position tracking for all-terrain robots
  • Jan 1, 2005
  • Infoscience (Ecole Polytechnique Fédérale de Lausanne)
  • Pierre Lamon


  • Book Chapter
  • Cited by 5
  • 10.1007/978-3-642-24031-7_21
Fuzzy Logic Based Sensor Fusion for Accurate Tracking
  • Jan 1, 2011
  • Ujwal Koneru + 2 more

Accuracy and tracking update rates play a vital role in determining the quality of Augmented Reality (AR) and Virtual Reality (VR) applications. Applications like soldier training, gaming, simulations and virtual conferencing need high-accuracy tracking with an update frequency above 20 Hz for an immersive experience of reality. Current research techniques combine more than one sensor, such as cameras, infrared sensors, magnetometers and Inertial Measurement Units (IMUs), to achieve this goal. In this paper, we develop and validate a novel algorithm for accurate positioning and tracking using inertial and vision-based sensing techniques. The inertial sensing utilizes accelerometers and gyroscopes to measure rates and accelerations in the body-fixed frame and computes orientations and positions via integration. The vision-based sensing uses a camera and image processing techniques to compute the position and orientation. The sensor fusion algorithm proposed in this work uses the complementary characteristics of these two independent systems to compute an accurate tracking solution and minimizes the error due to sensor noise, drift and the different update rates of the camera and IMU. The algorithm is computationally efficient, is implemented on low-cost hardware and is capable of an update rate up to 100 Hz. The position and orientation accuracy of the sensor fusion is within 6 mm and 1.5°. By using fuzzy rule sets and adaptive filtering of data, we reduce the computational requirement below that of conventional methods (such as Kalman filtering). We have compared the accuracy of this sensor fusion algorithm with a commercial infrared tracking system. The outcome accuracy of this COTS IMU and camera sensor fusion approach is as good as that of the commercial tracking system at a fraction of the cost.

  • Research Article
  • Cited by 18
  • 10.1109/tla.2013.6601744
Sensor Fusion with Low-Grade Inertial Sensors and Odometer to Estimate Geodetic Coordinates in Environments without GPS Signal
  • Jun 1, 2013
  • IEEE Latin America Transactions
  • Douglas Daniel Sampaio Santana + 2 more

This paper presents a sensor fusion algorithm based on a Kalman filter to estimate geodetic coordinates and reconstruct a car test trajectory in environments where there is no GPS signal. The sensor fusion algorithm is based on low-grade strapdown inertial sensors (i.e. accelerometers and gyroscopes) and an incremental odometer, from which velocity measurements are obtained. Since the dynamic system is nonlinear, an Extended Kalman Filter (EKF) is used to estimate the states (i.e. latitude, longitude and altitude) and reconstruct the test trajectory. The relevance of this work lies in the fact that, although much has been published on the fusion of inertial sensors and GPS, no current literature addresses the form of sensor fusion proposed here. Another noteworthy aspect is that the proposed algorithm has the potential to be applied in environments where GPS signals are not available, such as a Pipeline Inspection Gauge (PIG), as depicted in figure 2. The inertial navigation system developed and tested shows that with inertial sensor measurements alone, a closed test trajectory cannot be reconstructed satisfactorily; however, with sensor fusion the trajectory can be reconstructed with relative success. In preliminary experiments, it was possible to reconstruct a closed trajectory of approximately 2800 m, attaining a final error of 13 m.

  • Supplementary Content
  • 10.48550/arxiv.2111.14355
Optimal Sensor Fusion Method for Active Vibration Isolation Systems in Ground-Based Gravitational-Wave Detectors
  • Nov 29, 2021
  • arXiv (Cornell University)
  • T Tsang + 3 more

Sensor fusion is a technique used to combine sensors with different noise characteristics into a super sensor that has superior noise performance. To achieve sensor fusion, complementary filters are used in current gravitational-wave detectors to combine relative displacement sensors and inertial sensors for active seismic isolation. Complementary filters are a set of digital filters whose transfer functions sum to unity. Currently, complementary filters are shaped and tuned manually rather than optimized, which can be suboptimal and hard to reproduce for future detectors. In this paper, an optimization-based method called H∞ synthesis is proposed for synthesizing optimal complementary filters according to the sensor noises themselves. The complementary filter design problem is converted into an optimization problem that seeks minimization of an objective function equivalent to the maximum difference between the super sensor noise and the lower bound in logarithmic scale. The method is exemplified by synthesizing complementary filters for sensor fusion of 1) a relative displacement sensor and an inertial sensor, 2) a relative displacement sensor coupled with seismic noise and an inertial sensor, and 3) a hypothetical displacement sensor and inertial sensor, which have slightly different noise characteristics compared to the typical ones. In all cases, the method produces complementary filters that suppress the super sensor noise equally close to the lower bound at all frequencies in logarithmic scale. The synthesized filters contain features that better suppress the sensor noises compared to the pre-designed complementary filters. Overall, the proposed method allows the synthesis of optimal complementary filters according to the sensor noises themselves and is a better and more versatile method for solving sensor fusion problems.
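The unity condition on complementary filters can be illustrated with a first-order discrete pair. This is a minimal sketch with an assumed crossover frequency, far simpler than the H∞-synthesized filters the paper proposes:

```python
import numpy as np

# First-order complementary pair with transfer functions
#   H_lp(z) = (1-a) / (1 - a z^-1),   H_hp(z) = a (1 - z^-1) / (1 - a z^-1),
# which sum to exactly 1 at every frequency, as complementary filters must.
fs = 1000.0                    # sample rate, Hz (assumed)
wc = 2 * np.pi * 1.0           # 1 Hz crossover frequency (assumed)
a = np.exp(-wc / fs)           # discrete-time pole

def fuse(disp, inertial):
    """Blend a relative displacement sensor (trusted at low frequency) with
    an inertially derived displacement (trusted at high frequency)."""
    est = np.empty_like(disp)
    lp, hp, prev = disp[0], 0.0, inertial[0]
    for i in range(len(disp)):
        lp = a * lp + (1 - a) * disp[i]       # low-pass branch
        hp = a * (hp + inertial[i] - prev)    # high-pass branch
        prev = inertial[i]
        est[i] = lp + hp
    return est

# Sanity check of the unity condition: feeding the same signal to both
# branches must return that signal unchanged (the branches sum to identity).
x = np.sin(np.linspace(0.0, 10.0, 2000))
recovered = fuse(x, x)
```

In a real isolation system the two inputs differ (each carries its own sensor noise), and the fused output then inherits the displacement sensor's low-frequency noise and the inertial sensor's high-frequency noise; choosing the crossover, which is what the paper optimizes, decides that trade-off.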

  • Research Article
  • Cited by 14
  • 10.3390/s140915641
Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction
  • Aug 25, 2014
  • Sensors (Basel, Switzerland)
  • Shengli Zhou + 4 more

The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to the low sampling rates supported by web-based vision sensors and the accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow update rates, while motion tracking with inertial sensors suffers from rapid deterioration in accuracy over time. This paper starts with a discussion of the developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan variance analysis and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
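The Allan variance analysis used here to identify stochastic noise terms can be sketched as follows. The overlapping estimator below is applied to synthetic white rate noise with assumed magnitudes, for which the Allan deviation should fall as 1/√τ:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a rate signal sampled at fs Hz,
    evaluated at the given cluster times taus (seconds)."""
    theta = np.cumsum(rate) / fs                 # integrate rate to angle
    devs = []
    for tau in taus:
        m = int(tau * fs)                        # samples per cluster
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        devs.append(np.sqrt(0.5 * np.mean(d**2)) / tau)
    return np.array(devs)

rng = np.random.default_rng(3)
fs = 100.0
rate = rng.normal(0.0, 0.1, 200_000)             # pure white rate noise (ARW only)
taus = [0.1, 1.0, 10.0]
adev = allan_deviation(rate, fs, taus)
# White noise appears as a -1/2 slope on a log-log Allan plot, so each
# decade of tau should reduce the deviation by a factor of sqrt(10).
```

On a real IMU log, the characteristic slopes of the Allan plot (−1/2 for angle random walk, 0 for bias instability, +1/2 for rate random walk) are what let each noise term be identified and modeled separately.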

  • Research Article
  • Cited by 5
  • 10.1109/access.2020.3032013
Visual-Inertial Fusion Based Positioning Systems
  • Jan 1, 2020
  • IEEE Access
  • Jianan Zhang + 1 more

In this paper, we developed a visible light positioning (VLP) system using a camera and low-cost inertial measurement units (IMUs). Applying computer vision and sensor fusion techniques, our VLP system is able to estimate the angle of arrival (AoA) and the distance from a landmark to a mobile device. Due to the complementary nature between IMUs and cameras, we are able to improve the performance of VLP systems by applying sensor fusion. Currently, most optical positioning systems require at least two line-of-sight (LOS) links, so the coverage is not always satisfactory. Using a single round light-emitting diode (LED) panel or two coplanar black thick rings as the landmark, our VLP system only needs one LOS link to estimate the orientation and position of the mobile device. By activating inertial navigation, our VLP system is able to perform localization even if the landmark is temporarily blocked by obstacles. We derived approximated upper bounds of the angular errors and applied visual-inertial sensor fusion in estimating the Euler angles of the mobile device. Since the weights of sensor fusion are determined by upper bounds, the expected maximum errors are minimized in our positioning system. In our field experiments, the positioning system has an average positioning error of 0.18 m with an effective positioning range of 7 m. Compared to similar types of positioning systems, our system has significant improvements in positioning range without sacrificing positioning accuracy.

  • Conference Article
  • Cited by 15
  • 10.1109/sisy.2018.8524610
A Real-Time Pose Estimation Algorithm Based on FPGA and Sensor Fusion
  • Sep 1, 2018
  • Laszlo Schaffer + 2 more

Combining measurements of different sensors is a crucial step in achieving better precision in pose estimation. Sensor fusion is an effective state estimation method (in this case a Kalman filter) that is used in several disciplines. Using sensor fusion, the information from the sensors and the characteristics of each sensor can be used together to improve the estimate and decrease the uncertainty of the measured variables. In this paper, a real-time pose estimation algorithm using sensor fusion of visual odometry (optical flow), Inertial Measurement Unit (IMU) and Global Positioning System (GPS) measurements is presented. The IMU contains a calibrated three-degrees-of-freedom (3-DoF) accelerometer and a 3-DoF gyroscope. A Kalman filter is used for the fusion of the measurements of the different sensors. The algorithm is implemented in MATLAB and on a low-cost Z-7010 Field-Programmable Gate Array (FPGA) using the ZYBO development board, which is capable of real-time pose estimation with sensor fusion.

  • Research Article
  • Cited by 17
  • 10.1109/tim.2022.3188509
Sensor Fusion Based on Embedded Measurements for Real-Time Three-DOF Orientation Motion Estimation of a Weight-Compensated Spherical Motor
  • Jan 1, 2022
  • IEEE Transactions on Instrumentation and Measurement
  • Min Li + 2 more

This paper presents a sensor fusion method to estimate the three-degrees-of-freedom (3-DOF) orientation and angular velocity of a ball-joint-like permanent-magnet spherical motor (PMSM) using embedded sensors that simultaneously measure the existing magnetic flux density (MFD) field and the back electromotive force (back-EMF); these measurements serve as inputs to a Kalman filter (KF) based sensor fusion system for full-state estimation of 3-DOF angular displacement and velocity in real time. Formulated in quaternion representation, the sensor fusion system consists of an artificial neural network (ANN) that determines the 3-DOF orientation from the measured MFD and an EMF-velocity model. Its effectiveness and accuracy have been experimentally evaluated on an additively manufactured prototype PMSM Weight-Compensating Regulator (WCR) by comparing the estimated orientation and angular velocity with those measured by two widely used methods: an optical laser-beam system and an inertial measurement unit (IMU). The experimental findings demonstrate that the KF-based sensor fusion effectively overcomes the MFD sensor noise and IMU drift problems and is capable of simultaneous measurement of 3-DOF angular displacement and velocity with improved accuracy relative to the popular IMU measurements.
