Abstract
This paper reviews visual system models based on event- and frame-based vision sensors. Event-based sensors mimic the retina by recording data only in response to changes in the visual field, which supports real-time processing and reduces redundancy. Frame-based sensors, in contrast, capture entire frames at fixed intervals, producing largely redundant data that demands more processing resources. This work develops a hybrid model that combines both sensor types to improve efficiency and reduce latency. Through simulations and experiments, the approach addresses limitations in data integration and speed, offering improvements over existing methods. State-of-the-art systems are highlighted, particularly in sensor fusion and real-time processing, where dynamic vision sensor (DVS) technology shows significant potential. The study also discusses current limitations, such as latency and integration challenges, and explores solutions that combine biological and computer vision approaches to improve scene perception. These findings have important implications for vision systems, especially in robotics and autonomous applications that demand real-time processing.
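The change-driven behavior of event-based sensors can be illustrated with a short simulation. The sketch below is a minimal, assumption-laden illustration rather than the paper's model: it emulates a DVS by thresholding log-intensity changes between a per-pixel reference image and an incoming frame, where the function name `dvs_events`, the threshold value, and the per-pixel reset are simplifications introduced here for exposition.

```python
import numpy as np

def dvs_events(ref_log, frame, threshold=0.2, t=0.0):
    """Emit DVS-style events where log-intensity change exceeds a threshold.

    Returns (events, updated reference log-intensity). Each event is a
    tuple (x, y, timestamp, polarity), with polarity +1 for brightening
    and -1 for dimming, following the standard DVS event model.
    """
    log_i = np.log(frame.astype(np.float64) + 1e-6)
    diff = log_i - ref_log
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    events = [(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
              for y, x in zip(ys, xs)]
    # Reset the reference only at pixels that fired, as a real DVS does;
    # unchanged pixels keep their old reference and stay silent.
    ref_log = ref_log.copy()
    ref_log[ys, xs] = log_i[ys, xs]
    return events, ref_log

# A static scene produces no events; only the changed pixel fires,
# which is the redundancy reduction the abstract describes.
ref = np.log(np.full((4, 4), 128.0) + 1e-6)
frame = np.full((4, 4), 128.0)
frame[2, 1] = 200.0  # a single pixel brightens
events, ref = dvs_events(ref, frame, t=0.001)
print(events)  # [(1, 2, 0.001, 1)]
```

In a hybrid pipeline of the kind the abstract outlines, such an event stream would supplement periodic full frames: the frames supply dense absolute intensity, while the sparse events fill the intervals between frames with low-latency change information.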