Abstract
J. Wachs (corresponding author) · M. Mejail · B. Fishbain · L. Alvarez
Purdue University, West Lafayette, USA
e-mail: jpwachs@purdue.edu

The overarching goal of the pattern recognition community consists of presenting hypotheses to describe classes of objects using mathematical models, processing the information to eliminate the presence of noise, and selecting the model that best explains the given observations; nevertheless, the community does not prioritize memory and time complexity when matching models to observations. Given that we describe, explain and manipulate these objects through the perceptual system, there is an increasing need to favor those pattern recognition techniques that can explain, process and predict large volumes of visual data in real time. Such techniques cannot be developed ‘‘in vitro’’ due to the physical constraints of the complex environment and the context in which they are used. Further, these new methods need to achieve high detection, classification and recognition accuracies in real time, even when these are conflicting objectives. To make pattern recognition techniques viable for practical applications (such as surveillance, robotics and medical applications), considerations such as computational complexity reduction, hardware implementation, software optimization, and strategies for parallelizing solutions must be observed.

This Special Issue of the Springer Journal of Real-Time Image Processing, entitled ‘‘Real-Time Image and Video Processing for Pattern Recognition Systems and Applications’’, is dedicated to methods and tools; architectures, platforms and technologies; user-centered case studies and applications; and theoretical foundations that facilitate real-time image processing aided by fundamental pattern recognition methods. This Special Issue (SI) is oriented toward both theoretical and practical research, following the main theme of the journal, which is real-time performance. Together with the contributions of the papers, trade-offs and future steps are discussed thoroughly in this SI.

The call for papers resulted in 21 submissions. The quality of each paper was assessed by at least two reviewers, one guest editor and one editor-in-chief, and those meeting the top standards were sent to a second round of reviews. Finally, 11 papers were selected for publication.

The papers discussed below can be divided broadly into the following thrusts. The first concerns real-time tracking and motion estimation and includes five papers. The second involves real-time navigation in robotics and contains two papers. The third, with two papers, is about real-time image and video processing. The last two papers belong to the final thrust, involving medical applications. The papers are summarized below.

In the paper entitled ‘‘High frame-rate tracking of multiple color-patterned objects’’, Q. Gu et al. present a high frame-rate vision system capable of tracking multiple color-patterned objects based on color histogram models. Tracking is achieved by implementing an expanded cell-based labeling algorithm as hardware logic. The hardware implementation of the expanded cell-based labeling algorithm consists of building hue-color histograms of the objects of interest in an image, extracting statistical features (e.g., position, area, orientation) and using this information for tracking.
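The hardware architecture of the expanded cell-based labeling algorithm is specific to Gu et al.'s system and is not reproduced here. As a rough, software-only sketch of the general pipeline described above (back-projecting a reference hue histogram onto a frame and deriving position, area and orientation from image moments), the following Python code may help fix ideas; the function names, bin count and threshold are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def hue_histogram(hue, mask=None, bins=32):
    """Normalized hue histogram of a region (hue in [0, 180), OpenCV convention)."""
    values = hue if mask is None else hue[mask]
    h, _ = np.histogram(values, bins=bins, range=(0, 180))
    return h / max(h.sum(), 1)

def track_by_hue(hue_frame, ref_hist, bins=32, threshold=0.5):
    """Back-project a reference hue histogram and return position, area, orientation."""
    # Back-projection: each pixel receives the weight of its hue bin under ref_hist.
    bin_idx = np.clip((hue_frame.astype(int) * bins) // 180, 0, bins - 1)
    likelihood = ref_hist[bin_idx]
    mask = likelihood > threshold * ref_hist.max()

    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # target not visible in this frame
    area = xs.size                       # zeroth moment
    cx, cy = xs.mean(), ys.mean()        # first moments -> centroid (position)
    # Second central moments -> orientation of the equivalent ellipse.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return {"position": (cx, cy), "area": area, "orientation": theta}
```

In the paper, these per-object statistics are computed by dedicated hardware logic at high frame rates; the sketch above only conveys the dataflow, not the performance characteristics.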
The paper ‘‘A computationally efficient tracker with direct appearance kinematic measure and adaptive Kalman filter’’ by R. Ben-Ari and O. Ben-Shahar presents motion tracking in real time using low computational resources. The paper suggests a method capable of tracking in colour.
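Ben-Ari and Ben-Shahar's direct appearance kinematic measure and their specific adaptation rule are not detailed in this summary. Purely as an illustration of what ‘‘adaptive’’ can mean in a Kalman tracking context, the generic constant-velocity sketch below inflates the measurement noise when an (assumed) appearance-confidence score is low, so that the filter leans on its prediction; the class name, state layout and confidence parameter are hypothetical and not taken from the paper.

```python
import numpy as np

class AdaptiveKalmanTracker2D:
    """Generic constant-velocity Kalman filter over the state [x, y, vx, vy]."""

    def __init__(self, x0, y0, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])        # state estimate
        self.P = np.eye(4)                           # state covariance
        self.F = np.array([[1, 0, dt, 0],            # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],             # only position is measured
                           [0, 1, 0, 0]], dtype=float)
        self.Q = process_var * np.eye(4)             # process noise
        self.R0 = meas_var * np.eye(2)               # nominal measurement noise

    def step(self, z, confidence=1.0):
        """Predict, then update with measurement z = (x, y).

        `confidence` in (0, 1]: a low appearance-confidence score inflates the
        measurement noise, so the filter trusts its prediction more.
        """
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Adapt the measurement noise to the (assumed) appearance confidence
        R = self.R0 / max(confidence, 1e-3)
        # Update
        innovation = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                            # filtered position estimate
```

A caller would initialize the tracker from the first detection and invoke step() once per frame with the detector's position estimate and an appearance score; this is only one of many possible adaptation schemes.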