Blind image quality assessment (BIQA) for remote sensing applications has attracted considerable interest from the community, yet it remains a persistent and difficult problem. Existing quality indicators largely aim to agree with subjective human perception. Traditional handcrafted quality measures focus on low-level attributes such as contour, edge, color, texture, and shape, but they may overlook the essential semantics underlying the distorted image. With the popularity of deep learning, acquiring multilayer features is straightforward; however, many models either ignore shallow features or rely solely on high-level ones, which ultimately degrades prediction performance. To represent varying degrees of distortion, the proposed fusion features exploit properties of the human visual system to extract both local and global information, and the final quality score is determined by combining local-level and global-level features. In this study, we develop two no-reference IQA (NR-IQA) methods: one built on a Markov random field and the other on a sparse-approximation variational autoencoder. The effectiveness of our work is demonstrated on multiple IQA datasets, where our model consistently achieves state-of-the-art results. This evaluation uses no reference quality standards; instead, it compares our model against a range of well-known state-of-the-art techniques and mean opinion scores collected from human observers. The proposed regression models accurately predict quality ratings.
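To make the local/global fusion idea concrete, the sketch below shows one minimal way such a predictor could be structured: a shallow convolutional branch for local, low-level structure, a deeper branch for global semantics, concatenation of pooled features, and a regression head producing a single quality score. This is an illustrative assumption, not the authors' exact architecture; the class name `LocalGlobalFusionIQA` and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn


class LocalGlobalFusionIQA(nn.Module):
    """Minimal sketch of local/global feature fusion for NR-IQA.

    Hypothetical layer sizes; the real models in the paper (Markov random
    field and sparse-approximation VAE variants) are not reproduced here.
    """

    def __init__(self):
        super().__init__()
        # Shallow branch: local, low-level structure (edges, texture).
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Deeper branch: global, semantic content of the distorted image.
        self.global_branch = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # pool each branch to a fixed-size vector
        # Regression head maps the fused feature vector to one quality score.
        self.regressor = nn.Sequential(
            nn.Linear(32 + 128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        local_feat = self.local_branch(x)
        global_feat = self.global_branch(local_feat)
        fused = torch.cat(
            [self.pool(local_feat).flatten(1), self.pool(global_feat).flatten(1)],
            dim=1,
        )
        return self.regressor(fused).squeeze(1)  # one predicted score per image


# Usage example: score a batch of two 224x224 RGB images.
model = LocalGlobalFusionIQA()
scores = model(torch.randn(2, 3, 224, 224))
print(scores.shape)  # torch.Size([2])
```

In a full system, such a regressor would be trained against mean opinion scores so that the fused local and global features learn to track human judgments of distortion severity.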