Abstract
With the rapid advance of big data and artificial intelligence technologies, users increasingly upload local files to cloud servers to offset limited local storage. However, the resulting flood of duplicate files, especially images and videos, wastes significant network bandwidth and complicates server management. To address these issues, we develop a multi-parameter video quality assessment model based on a 3D convolutional neural network within a video deduplication framework. Inspired by the analytic hierarchy process, our method systematically evaluates the effects of packet loss rate, codec, frame rate, bit rate, and resolution on video quality. The model employs a two-stream 3D convolutional neural network that fuses spatial and temporal streams to capture video distortion details, with a coding layer configured to remove redundant distortion information. We validate our approach on the LIVE and CSIQ datasets, comparing its performance against the V-BLIINDS and VIDEO schemes across different packet loss rates. We further simulate client-server interaction on a subset of the dataset and measure the scheme's time efficiency. The results indicate that the proposed scheme provides a highly efficient solution for video quality assessment.
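The two-stream fusion described above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: the spatial stream sees raw frames, the temporal stream sees frame differences as a motion proxy, each passes through a naive single-channel 3D convolution, and a hypothetical linear "coding layer" bottleneck stands in for the redundancy-removing layer. All sizes and weights are illustrative.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive single-channel 'valid' 3D convolution, for illustration only."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# Hypothetical toy clip: 8 grayscale frames of 16x16 pixels.
rng = np.random.default_rng(0)
clip = rng.random((8, 16, 16))

# Spatial stream: raw frames. Temporal stream: frame differences
# (a common motion proxy; an assumption, not the paper's exact design).
spatial_in = clip
temporal_in = np.diff(clip, axis=0)

kernel = rng.random((3, 3, 3))
spatial_feat = conv3d_valid(spatial_in, kernel)    # shape (6, 14, 14)
temporal_feat = conv3d_valid(temporal_in, kernel)  # shape (5, 14, 14)

# Fuse the streams by concatenating per-frame pooled responses, then apply
# a linear bottleneck standing in for the redundancy-removing coding layer.
fused = np.concatenate([spatial_feat.mean(axis=(1, 2)),
                        temporal_feat.mean(axis=(1, 2))])  # length 11
coding_weights = rng.random((4, fused.size))  # hypothetical code size of 4
code = coding_weights @ fused                 # compact distortion descriptor
```

In a real network the convolution kernels and coding-layer weights would be learned end to end; the sketch only shows how spatial and temporal evidence are extracted separately and then fused into one compact representation.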