Abstract

To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition timing of the vision sensors must be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Light incident on a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, so that it is synchronized with the reference. Simulated and experimental results show that a vision sensor running at a 1,000 Hz frame rate was successfully synchronized with a jitter of 32 μs.
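As a concrete illustration of the loop described above, the following is a minimal software sketch of a first-order PLL that locks a frame clock to an intensity-modulated light reference. It is not the paper's implementation, which performs the correlation through analog photo integration inside the imager; every parameter here (F_REF, F0, K, the 50 Hz low-pass) is an assumed value chosen only to make the sketch run.

```python
import numpy as np

# Minimal first-order PLL sketch; all values are illustrative, not from the paper.
F_REF = 500.0   # modulation frequency of the illumination [Hz] (assumed)
F0 = 500.5      # free-running frame-clock frequency [Hz], slightly detuned
FS = 100_000.0  # simulation step rate [Hz]
K = 40.0        # loop gain [Hz per unit detector output] (assumed)

dt = 1.0 / FS
n = int(2.0 * FS)                    # simulate 2 s
t = np.arange(n) * dt
ref = np.sin(2 * np.pi * F_REF * t)  # intensity-modulated illumination (reference)

phase, lp = 0.0, 0.0                 # local frame-clock phase; low-passed detector output
for i in range(n):
    local = np.cos(phase)                       # local clock, in quadrature with the reference
    raw = ref[i] * local                        # multiplier phase detector ("time correlation")
    lp += (raw - lp) * (2 * np.pi * 50.0 * dt)  # one-pole low-pass at ~50 Hz kills the 2f term
    phase += 2 * np.pi * (F0 + K * lp) * dt     # steer the frame clock with the correlation

# Once locked, a small constant correlation remains: it is what pulls the
# frame clock 0.5 Hz away from its free-running frequency.
theta = (2 * np.pi * F_REF * n * dt - phase + np.pi) % (2 * np.pi) - np.pi
print(f"steady-state phase error: {theta:.4f} rad, "
      f"predicted arcsin(2*(F_REF-F0)/K) = {np.arcsin(2 * (F_REF - F0) / K):.4f} rad")
```

The final print compares the simulated steady-state phase error against the first-order prediction, the same trend noted in the highlights below.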

Highlights

  • When multiple vision sensors are used to acquire images of a scene from multiple points of view to achieve, for example, cooperative tracking, wide-area monitoring, or 3D motion measurement, the image sequences produced by the sensors should be synchronized.

  • Internal functions of the vision sensor, including the analog photo-integration process in the imager and the digital computation executed outside the imager, form a phase-locked loop (PLL) that regulates the output signal, which corresponds to the vision frame timing, so that the output is synchronized with the reference.

  • The larger the frequency discrepancy, the larger the observed steady-state phase error from the π/2 shift. This is explained by the fact that a non-zero time correlation must be produced to hold the vision frame rate away from its central value (see the relation sketched below).
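This trend is exactly what standard analysis predicts for a first-order PLL with a multiplier (correlation) phase detector. Assuming a total loop gain K and an angular-frequency offset Δω between the illumination reference and the free-running frame clock (notation assumed here, not quoted from the paper), the steady-state phase error sits away from the π/2 lock point by

$$\theta_{ss} - \frac{\pi}{2} = \arcsin\!\left(\frac{\Delta\omega}{K}\right),$$

which grows with the frequency discrepancy and vanishes when the free-running frame rate equals the reference frequency.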

Summary

Introduction

When multiple vision sensors are used to acquire images of a scene from multiple points of view to achieve, for example, cooperative tracking, wide-area monitoring, or 3D motion measurement, the image sequences produced by the sensors should be synchronized. Synchronization of image sequences in general involves two concepts: one is to produce temporally aligned vision frames during image acquisition, and the other is to establish correct correspondence between the vision frames. This paper is concerned with the former and proposes a novel synchronization technique that can be used even in low-cost wireless vision sensor networks. Synchronization of vision sensors is a critical requirement in some applications, particularly in industrial and scientific measurement. Virtual synchronization, e.g., [1], in which interpolation and prediction between the frames of unsynchronized cameras are used, can be an alternative, but real synchronization is clearly advantageous when, for example, the motion of the target objects is fast and random, and/or highly precise position information is required.
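Virtual synchronization of the kind cited as [1] can be pictured, in its simplest form, as interpolation of one camera's measurements onto a common timeline. The sketch below is a hypothetical stand-in for illustration only; the function name and all numbers are invented, and [1] may use more sophisticated prediction.

```python
import numpy as np

def virtual_sync(frame_times, values, query_times):
    """Hypothetical stand-in for virtual synchronization: linearly
    interpolate one camera's measurements onto a common timeline."""
    return np.interp(query_times, frame_times, values)

# Illustrative numbers: ~30 fps frames with an unknown 12 ms offset.
ts = 0.0333 * np.arange(10) + 0.012   # this camera's frame times [s]
xs = np.sin(2 * np.pi * 1.5 * ts)     # observed target coordinate
common = 0.0333 * np.arange(1, 10)    # shared timeline to align onto
print(virtual_sync(ts, xs, common))
```

Such interpolation degrades when the target motion is fast relative to the frame interval, which is precisely the case where the paper argues real synchronization is needed.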
