Abstract

In recent years, video and image analysis tools have been increasingly employed in real-time applications, including lane and car recognition for intelligent transportation systems, human object segmentation and tracking for intelligent video surveillance systems, and face detection and image indexing for digital still cameras and camcorders. To implement these analysis tools in real-time applications, new computing architectures, such as reconfigurable architectures, application-specific instruction-set processors, stream processing architectures, and dedicated processing elements, have been developed to handle increasingly complex real-time content analysis. It is often necessary to integrate specially designed hardware accelerators with other processors to achieve high processing speed. Furthermore, new algorithms that are suitable for hardware design or for implementation on existing architectures play an important role in such systems. The purpose of this special issue is to report on new hardware design ideas that support these video and image analysis tools.

This special issue contains two parts. The first part covers computing platform design, including vision processors and memory sub-system design. The second part presents several design case studies of image and video analysis systems, including machine learning engines and video segmentation engines.

The first part begins with two general-purpose vision processors with a single-instruction-multiple-data (SIMD) architecture that have been developed in industry. Both new-generation designs feature enhanced capabilities for processing higher-level video analysis tasks, each with a different scheme. In “IMAPCAR: A 100 GOPS In-vehicle Vision Processor Based on 128 Ring Connected 4-Way VLIW Processing Elements,” Kyo and Okazaki design an in-vehicle vision processor built on an array of 128 8-bit four-way very-long-instruction-word (VLIW) RISC processing elements (PEs). Compared to their previous design, IMAP-CE, the new design achieves 2.5 times higher performance through improved video I/O flexibility and data remapping, the addition of one MAC unit per PE, and a more reliable memory structure. In “Xetal-II: A Low-Power Massively-Parallel Processor for Video Scene Analysis,” Abbo, Kleihorst, and Schueler design a 140 GOPS image processor with a massively parallel SIMD (MP-SIMD) architecture comprising 320 PEs arranged as a linear processor array. To support region-based processing, it provides a low-cost look-up table (LUT) as well as flag aggregation and flag-based result selection.

In addition to computation engines, the design of memory sub-systems also plays an important role in a video and image analysis system. In “Streaming Data Movement for Real-Time Image Analysis,” Lopez-Lagunas and Chai propose the notion of stream descriptors as a means to define image stream access patterns and to improve memory access efficiency by exploiting the locality between different data streams. Examples are provided with a Reconfigurable Streaming Vector Processor (RSVP), and the design concept can be widely applied to different computing platforms such as ASICs and reconfigurable hardware.

The second part describes four case studies that target different computing platforms: FPGA, multi-core processor, reconfigurable processor, and hybrid computing platform.

S.-Y. Chien (*) National Taiwan University, Taipei, Taiwan; e-mail: sychien@cc.ee.ntu.edu.tw
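As a rough illustration of the flag-based selection mentioned for Xetal-II, the following C sketch models a 320-lane SIMD array in software: every lane executes the same operation, but only lanes whose flag marks them as part of the region of interest commit their result, and a flag aggregation reduces all lane flags to a single scalar. The names pe_lane_t, simd_select, and simd_flag_any are illustrative assumptions and do not reflect the actual Xetal-II instruction set; only the 320-PE linear array width is taken from the abstract.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PE 320  /* Xetal-II arranges its 320 PEs as a linear processor array */

/* One lane of per-PE state (illustrative, not the actual Xetal-II register set). */
typedef struct {
    int16_t acc;   /* working value held by this PE                */
    uint8_t flag;  /* 1 if the PE's pixel belongs to the region    */
} pe_lane_t;

/* Flag-based result selection: all PEs execute in lock-step, but only
 * lanes whose flag is set commit the new result, which is how
 * region-based operations map onto a SIMD array. */
void simd_select(pe_lane_t lanes[NUM_PE], const int16_t new_val[NUM_PE])
{
    for (int pe = 0; pe < NUM_PE; ++pe)
        if (lanes[pe].flag)
            lanes[pe].acc = new_val[pe];
}

/* Flag aggregation: reduce all lane flags to one scalar, e.g. to ask
 * "does any pixel in this row still belong to the region?" */
int simd_flag_any(const pe_lane_t lanes[NUM_PE])
{
    int any = 0;
    for (int pe = 0; pe < NUM_PE; ++pe)
        any |= lanes[pe].flag;
    return any;
}

int main(void)
{
    pe_lane_t lanes[NUM_PE] = {0};
    int16_t result[NUM_PE];

    for (int pe = 0; pe < NUM_PE; ++pe) {
        lanes[pe].flag = (pe >= 100 && pe < 200);  /* mark a region of interest */
        result[pe] = (int16_t)(2 * pe);            /* value computed by every PE */
    }
    simd_select(lanes, result);                    /* committed only inside the region */
    printf("any flag set: %d, lane 150 acc: %d\n",
           simd_flag_any(lanes), lanes[150].acc);
    return 0;
}
```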
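Similarly, the stream-descriptor idea of Lopez-Lagunas and Chai can be sketched as a compact record of base address, stride, span, skip, and record count that a DMA-like engine consumes to move image data without per-pixel address computation on the processor. The structure and function below are a minimal software model under these assumptions; the actual RSVP descriptor format differs, and the names stream_desc_t and stream_gather are not taken from the paper.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stream descriptor: describes how an image stream walks
 * through memory so a DMA or prefetch engine can schedule the transfers
 * ahead of the computation. */
typedef struct {
    const uint8_t *base;    /* address of the first element              */
    ptrdiff_t      stride;  /* step between consecutive elements         */
    size_t         span;    /* elements per record (e.g. pixels per row) */
    ptrdiff_t      skip;    /* step between records (e.g. image pitch)   */
    size_t         count;   /* number of records in the stream           */
} stream_desc_t;

/* Software model of the streaming engine: gather the described stream
 * into a contiguous buffer, as a DMA unit would. */
size_t stream_gather(const stream_desc_t *d, uint8_t *dst)
{
    size_t n = 0;
    for (size_t r = 0; r < d->count; ++r) {
        const uint8_t *p = d->base + r * d->skip;
        for (size_t e = 0; e < d->span; ++e, p += d->stride)
            dst[n++] = *p;
    }
    return n;
}

int main(void)
{
    enum { W = 64, H = 48 };
    uint8_t image[W * H];
    memset(image, 7, sizeof image);

    /* Describe a 16x16 block at (x, y) = (8, 4): contiguous pixels within
     * a row, then a jump of one image pitch to the next row. */
    stream_desc_t block = {
        .base = image + 4 * W + 8,
        .stride = 1, .span = 16, .skip = W, .count = 16
    };
    uint8_t buf[16 * 16];
    printf("gathered %zu pixels\n", stream_gather(&block, buf));
    return 0;
}
```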
