Abstract
The Bio-inspired (Bi-i) Cellular Vision System is a computing platform combining sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with its implementation on the hardware platform, is described in detail throughout the article. The experimental results, obtained on the Foreman and Car-phone video sequences, demonstrate the feasibility of the approach, which achieves a frame rate of about 26 frames/s. Comparisons with existing CNN-based methods show that the proposed approach is more accurate, thus representing a good trade-off between real-time requirements and accuracy.
Highlights
Due to the recent advances in communication technologies, interest in video content has increased significantly, and it has become increasingly important to automatically analyze and understand video content using computer vision techniques.
With the recent advances in several new multimedia applications, there is a need to develop segmentation algorithms running on efficient hardware platforms [16,17,18].
In [16], an algorithm for the real-time segmentation of endoscopic images running on a special-purpose hardware architecture is described.
Summary
Due to the recent advances in communication technologies, interest in video content has increased significantly, and it has become increasingly important to automatically analyze and understand video content using computer vision techniques. With regard to segmentation algorithms running on hardware platforms, this article focuses on the implementation of algorithms running on the Cellular Neural/Nonlinear Network (CNN) Universal Machine [5,6,7]. This architecture offers great computational capabilities, which are suitable for complex image-analysis operations in object-oriented approaches [8,9,10]. After a preliminary selection of edge candidates, performed by the first block of the algorithm, the second block enables accurate edge detection to be obtained, using a technique able to highlight the discontinuity areas.
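To make the role of the edge-detection block more concrete, the following is a minimal sketch of how a single Cellular Neural Network template pass can highlight discontinuity areas in a frame. The NumPy/SciPy code, the Laplacian-like control template B, and the bias value are illustrative assumptions taken from standard CNN edge-detection examples; they are not the templates or the hardware code used on the Bi-i system in the article.

```python
# Illustrative sketch only (not the authors' implementation): one feed-forward
# CNN template pass that highlights intensity discontinuities, i.e. candidate
# edges, in a grayscale frame.
import numpy as np
from scipy.ndimage import convolve

def cnn_edge_step(frame: np.ndarray, bias: float = -1.0) -> np.ndarray:
    """One feed-forward CNN template pass: y = f(B * u + z)."""
    # Laplacian-like control template B: strong response at discontinuities,
    # near-zero response in flat regions. Values are textbook assumptions.
    B = np.array([[-1.0, -1.0, -1.0],
                  [-1.0,  8.0, -1.0],
                  [-1.0, -1.0, -1.0]])
    u = frame.astype(np.float64)
    state = convolve(u, B, mode="nearest") + bias
    # Standard CNN output nonlinearity: piecewise-linear saturation to [-1, 1].
    return np.clip(state, -1.0, 1.0)

if __name__ == "__main__":
    # Synthetic 8x8 frame with a sharp vertical discontinuity.
    frame = np.zeros((8, 8))
    frame[:, 4:] = 1.0
    edges = cnn_edge_step(frame)
    print((edges > 0).astype(int))  # nonzero column marks the detected edge
```

On the Bi-i platform such template operations run on the analog cellular processor array rather than as software convolutions; the sketch above only mirrors the functional behavior of a discontinuity-highlighting step.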