Recent advances in intelligent surveillance for public security have increased interest among computer vision researchers in locating and identifying anomalies in crowded scenes. Unrestricted video monitoring is widely deployed in both private and public settings, and research on detecting anomalies in surveillance videos has therefore drawn considerable attention. Video condensation is an efficient technique for quickly scanning and retrieving lengthy surveillance footage: by reordering video events in the spatial or temporal domain, it compresses long video sequences into a compact equivalent representation. Moreover, any delay in anomaly detection increases the time required to respond to and rectify the anomaly. The core aim of this work is therefore to reduce the time taken to detect video anomalies by introducing optimal object stitching through a deep-learning-based video condensation scheme. The method comprises two phases: video condensation and deep-learning-based anomaly detection. First, the input video is gathered from the datasets and converted into frames, and objects are detected in each frame using Improved YOLOv5 (I-YOLOv5). Optimal frames are then selected with the hybrid Fitness of Wild Geese and Path Finder (FWGPF) algorithm, and the detected objects are stitched into these selected frames, thereby compressing the relevant frames. In the second phase, the condensed (compressed) frames are classified as anomalous or normal using Parameter Tuning in Dilated MobileNet (PTDM), and the anomalous objects in the classified condensed frames are localized with I-YOLOv5. The parameters of the anomaly detection phase are tuned by the same hybrid FWGPF algorithm to further reduce time consumption.
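The two-phase pipeline described above can be sketched in code. Everything in this sketch is a toy stand-in: blob detection substitutes for I-YOLOv5, a simple area-based ranking substitutes for the hybrid FWGPF optimizer, round-robin pasting substitutes for the paper's object stitching, and a brightness check substitutes for the PTDM classifier. It illustrates the data flow (frames → detections → frame selection → stitching → anomaly classification), not the actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_frames(video):
    """Split a video array of shape (T, H, W) into a list of frames."""
    return [video[t] for t in range(video.shape[0])]

def detect_objects(frame, thresh=0.8):
    """Toy stand-in for I-YOLOv5: return a bounding box around bright pixels.
    A real detector would return (box, class, score) tuples per object."""
    ys, xs = np.where(frame > thresh)
    if len(ys) == 0:
        return []
    return [(ys.min(), xs.min(), ys.max() + 1, xs.max() + 1)]

def frame_fitness(boxes):
    """Toy fitness: frames with more detected object area score higher.
    The paper optimizes frame selection with the hybrid FWGPF
    metaheuristic; here we simply rank frames by this score."""
    return sum((y2 - y1) * (x2 - x1) for y1, x1, y2, x2 in boxes)

def condense(frames, keep=4):
    """Phase 1: keep the fittest frames and stitch every detected
    object into them, producing a compact representation."""
    detections = [detect_objects(f) for f in frames]
    order = sorted(range(len(frames)),
                   key=lambda i: frame_fitness(detections[i]),
                   reverse=True)
    selected = [frames[i].copy() for i in order[:keep]]
    # Round-robin stitch: paste each object crop into a selected frame.
    for j, (src, boxes) in enumerate(zip(frames, detections)):
        for y1, x1, y2, x2 in boxes:
            target = selected[j % keep]
            target[y1:y2, x1:x2] = np.maximum(target[y1:y2, x1:x2],
                                              src[y1:y2, x1:x2])
    return selected

def classify_anomaly(frame, thresh=0.95):
    """Toy stand-in for the PTDM classifier: flag frames that
    contain very bright pixels as anomalous."""
    return bool((frame > thresh).any())

# Synthetic 16-frame video with one injected "anomalous" bright blob.
video = rng.random((16, 32, 32)) * 0.5
video[10, 5:9, 5:9] = 1.0

frames = extract_frames(video)
condensed = condense(frames, keep=4)
flags = [classify_anomaly(f) for f in condensed]
print(len(frames), len(condensed), any(flags))  # 16 4 True
```

Only the condensed frames reach the classifier, which is the source of the claimed time saving: the second-phase model runs on `keep` frames rather than on the full sequence.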
The developed model is experimentally compared against baseline algorithms and detection techniques to confirm its efficiency.