Abstract

Visual complexity is an important factor affecting the efficiency and functionality of user interfaces. It significantly influences the user's impression and the usability of an interface, especially in mobile applications, which face constraints such as limited layout size, on-screen keys, and small input fields. Conventional approaches to evaluating the visual complexity of user interfaces rely on user surveys, pre-specified formal metrics, or heuristics. In this study, we instead explored the effectiveness of deep learning models for visual complexity evaluation, specifically of mobile user interfaces. Five state-of-the-art pre-trained deep learning models known to be effective for computer vision tasks, namely VGG16, DenseNet121, MobileNetV2, GoogLeNet, and ResNet152, were trained with 3635 mobile user interface images from four categories: login, menu, search, and settings. To validate the effectiveness of this approach, a new validation dataset and a survey application were developed, and an evaluation study was conducted with 98 participants, yielding 7309 comparison results. The agreement rate between the results of the deep learning models and the user evaluations reached up to 78%, with an average of 74%. This high-to-moderate agreement rate suggests that the approach can be useful for designers in evaluating the visual complexity of mobile user interfaces.
