With the rapid development of wireless network technology and the growing demand for multimedia transmission services over the mobile Internet, maintaining and improving user satisfaction has become a pressing concern. This in turn requires accurate evaluation of multimedia service quality in wireless networks. To address the problem of user experience quality, this study proposes a method for evaluating multimedia Quality of Experience (QoE) in wireless networks based on a deep learning model. First, the video session process is modeled, taking into account the state of each time interval within the session. The QoE prediction problem is then analyzed from the perspective of recurrent neural networks (RNNs), leading to a comprehensive QoE prediction model that integrates video information, Quality of Service (QoS) data, user behavior, and facial expression analysis. Experiments on the RTVCQoE dataset validate the proposed method through quantitative and qualitative comparison with state-of-the-art QoE models. The results show that the proposed model outperforms competing models on PLCC, SRCC, and KRCC, providing an accurate and reliable QoE evaluation methodology.
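The core idea of folding per-interval session features into a single QoE score with a recurrent model can be sketched as follows. This is a minimal illustrative sketch, not the authors' architecture: the feature groups, their dimensions, the Elman-style cell, and all weight initializations are assumptions made for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_qoe_score(intervals, W_x, W_h, w_out):
    """Fold a sequence of per-interval feature vectors into one QoE score."""
    h = np.zeros(W_h.shape[0])
    for x in intervals:
        h = np.tanh(W_x @ x + W_h @ h)  # Elman-style recurrence over intervals
    return float(w_out @ h)             # scalar QoE prediction for the session

def make_interval():
    # Four hypothetical feature groups per time interval (dimensions invented):
    # video (e.g. bitrate, resolution), QoS (e.g. delay, loss),
    # user behavior (e.g. pauses, seeks), and a facial-expression embedding.
    video = rng.normal(size=3)
    qos = rng.normal(size=3)
    behavior = rng.normal(size=2)
    face = rng.normal(size=4)
    return np.concatenate([video, qos, behavior, face])  # 12-dim input

d_in, d_h = 12, 8
W_x = rng.normal(scale=0.3, size=(d_h, d_in))   # untrained toy weights
W_h = rng.normal(scale=0.3, size=(d_h, d_h))
w_out = rng.normal(scale=0.3, size=d_h)

session = [make_interval() for _ in range(10)]  # a 10-interval session
qoe = rnn_qoe_score(session, W_x, W_h, w_out)
print(qoe)
```

In practice the recurrent cell would be a trained GRU or LSTM and the feature extractors would be learned; the sketch only shows how heterogeneous per-interval signals can be concatenated and summarized sequentially.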
This contribution improves the user experience of multimedia services in wireless networks and motivates further research and technological innovation in the mobile Internet domain.
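The three reported evaluation metrics are standard correlation coefficients between predicted and ground-truth quality scores, and can be computed with `scipy.stats`. The toy score lists below are invented for illustration and do not reproduce RTVCQoE data or the paper's results.

```python
from scipy.stats import pearsonr, spearmanr, kendalltau

# Hypothetical subjective scores (e.g. MOS) and model predictions.
mos = [1.2, 2.5, 3.1, 3.9, 4.7, 2.0, 4.1, 3.3]
pred = [1.5, 2.4, 3.0, 4.2, 4.5, 2.3, 3.8, 3.1]

plcc, _ = pearsonr(mos, pred)    # PLCC: linear correlation
srcc, _ = spearmanr(mos, pred)   # SRCC: rank-order (monotonic) correlation
krcc, _ = kendalltau(mos, pred)  # KRCC: pairwise rank agreement

print(f"PLCC={plcc:.3f} SRCC={srcc:.3f} KRCC={krcc:.3f}")
```

Higher values (closer to 1) indicate predictions that track subjective judgments more closely; SRCC and KRCC depend only on rank order, while PLCC is sensitive to the linearity of the fit.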