With the rapid advancement of big data and artificial intelligence technologies, uploading local files to cloud servers to relieve local storage constraints has become increasingly common. However, the surge of duplicate files, especially images and videos, wastes significant network bandwidth and complicates server-side management. To address these issues, we develop a multi-parameter video quality assessment model based on a 3D convolutional neural network within a video deduplication framework. Inspired by the analytic hierarchy process, our method comprehensively evaluates the effects of packet loss rate, codec, frame rate, bit rate, and resolution on video quality. The model uses a two-stream 3D convolutional neural network that fuses spatial and temporal streams to capture video distortion details, with a coding layer that removes redundant distortion information. We validate the approach on the LIVE and CSIQ datasets, comparing its performance against the V-BLIINDS and VIDEO schemes under different packet loss rates. Furthermore, we simulate client-server interaction on a subset of the dataset and assess the scheme's time efficiency. The results indicate that the proposed scheme provides an efficient solution for video quality assessment.
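To illustrate the analytic-hierarchy-process idea mentioned above, the sketch below derives priority weights for the five distortion factors from a pairwise-comparison matrix. The matrix entries here are purely illustrative assumptions (the paper's actual judgments are not reproduced); the weight computation uses the standard geometric-mean approximation and Saaty's consistency check, not necessarily the exact procedure used in the proposed model.

```python
import numpy as np

# Illustrative pairwise-comparison matrix over the five distortion factors
# (packet loss rate, codec, frame rate, bit rate, resolution).
# Entry A[i, j] encodes how much more factor i matters than factor j;
# these values are hypothetical, chosen only to demonstrate the method.
FACTORS = ["packet_loss", "codec", "frame_rate", "bit_rate", "resolution"]

A = np.array([
    [1.0, 3.0,   5.0, 3.0,   5.0],
    [1/3, 1.0,   3.0, 1.0,   3.0],
    [1/5, 1/3,   1.0, 1/3,   1.0],
    [1/3, 1.0,   3.0, 1.0,   3.0],
    [1/5, 1/3,   1.0, 1/3,   1.0],
])

def ahp_weights(A):
    """Priority weights via the geometric-mean (row-product) approximation."""
    g = A.prod(axis=1) ** (1.0 / A.shape[0])  # geometric mean of each row
    return g / g.sum()                         # normalize to sum to 1

def consistency_ratio(A, w):
    """CR = CI / RI; judgments are conventionally accepted when CR < 0.1."""
    n = A.shape[0]
    lam = (A @ w / w).mean()                   # estimate of principal eigenvalue
    ci = (lam - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index for n = 5
    return ci / ri

w = ahp_weights(A)
```

Under these sample judgments, packet loss rate receives the largest weight, and the consistency ratio stays below the usual 0.1 acceptance threshold; the resulting weight vector could then scale each factor's contribution to the overall quality score.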