Abstract

Automated feature analysis of dynamic video frame datasets addresses the complexity of intensity mapping across normal and abnormal classes. Iterative modelling is needed to learn the components of a video frame across the patterns of various video frame data types, supporting threshold-based data clustering and feature analysis. In this paper's CNN-based feature analysis technique, Grey Wolf Optimization (GWO) optimises the Convoluted Pattern of Wavelet Transform (CPWT) feature vectors. A median filter reduces noise and smooths each video frame before normalisation, and edge information delineates the boundaries of bright regions in the frame. Neural-network-based video frame classification then clusters pixels using recurrent feature learning with minimal dataset training. Features of the filtered frames are extracted using complex wavelet transform algorithms; these features capture the spatial and textural characteristics of the frames. A CNN classifier analyses video frame instances and assigns action labels. Classification accuracy improves even with the smallest training datasets, suggesting that this strategy compares favourably with established practices.
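The preprocessing and feature-extraction steps described above (median filtering, normalisation, wavelet-based texture features) can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes a 3×3 median filter, min-max normalisation, and a one-level Haar transform as a simple stand-in for the complex wavelet transform, since the exact CPWT construction and GWO tuning are not specified in the abstract.

```python
import numpy as np

def median_filter3(img):
    # 3x3 median filter: pad edges, gather the 9 neighbours of each
    # pixel, and take their median to suppress impulse noise.
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def haar2d(img):
    # One-level 2D Haar transform: average/detail along rows, then
    # columns, yielding the (LL, LH, HL, HH) subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def frame_features(frame):
    # Denoise, normalise to [0, 1], then summarise each wavelet
    # subband by its mean energy -- a crude spatial/textural descriptor.
    f = median_filter3(frame.astype(float))
    f = (f - f.min()) / (f.max() - f.min() + 1e-9)
    subbands = haar2d(f)
    return np.array([np.mean(s * s) for s in subbands])
```

In a full pipeline, per-frame feature vectors like these would be stacked and fed to the CNN classifier; the GWO step would then search over the feature/hyperparameter space to pick the configuration that maximises classification accuracy.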
