Transparent displays in public transport: a field evaluation of utility, usability, user experience and comfort
The accessibility of reliable travel information is a growing challenge in public transport as vehicles become increasingly automated and staff presence decreases. Transparent displays, integrated directly into windows, offer a novel way to present information in passengers’ natural line of sight. This study reports on a real-world field evaluation (N = 69) of such a system in a regional train, focusing on three dimensions: utility of displayed content; usability in terms of ergonomics and readability; and passenger experience, including comfort and technology acceptance. Results show that bright backgrounds and snow reduced legibility, while dusk and night-time improved Reading Performance. Despite these challenges, participants valued the novelty and relevance of the content, and Overall Passenger Comfort was not negatively affected. Visual Reading Comfort improved under higher-contrast conditions. Recommendations include automated contrast adjustment and optimised display placement. The study provides real-world evidence to guide ergonomic design and user experience standards.
- Research Article
- 10.1080/10447318.2016.1243928
- Oct 5, 2016
- International Journal of Human–Computer Interaction
The usability movement has historically sought to empower end-users of computers so that they understand what is happening and can control the outcome. In this article, we develop and evaluate a “Textual Feedback” tool for usability and user experience (UX) evaluation that can be used to empower well-educated but low-status users in UX evaluations in countries and contexts with high power distances. The proposed tool contributes to the Human–Computer Interaction (HCI) community’s pool of localized UX evaluation tools. We evaluate the tool with 40 users from two socio-economic groups in real-life UX evaluation settings in Malaysia. The results indicate that the Textual Feedback tool may help participants voice their thoughts in UX evaluations in high power distance contexts. In particular, the Textual Feedback tool helps high-status females and low-status males express more UX problems than they can with traditional concurrent think aloud (CTA) alone. We found that classic concurrent think aloud UX evaluation works well in high power distance contexts, but only with the addition of Textual Feedback to mitigate the effects of socio-economic status in certain user groups. We suggest that future research on UX evaluation look more into how to empower certain user groups, such as low-status female users, in UX evaluations conducted in high power distance contexts.
- Conference Article
- 10.18293/seke2016-127
- Jul 1, 2016
Usability and UX (User eXperience) are among the most important factors for evaluating the quality of mobile applications. They concern how easy an application is to use and the emotions that such use evokes. However, these aspects are often evaluated separately in industry through different evaluation techniques. Although it is possible to identify more usability and UX problems by employing different UX and usability evaluation methods, this distributed approach may not be cost-effective and may not allow evaluators to thoroughly explore the identified issues. In order to support the identification of both UX and usability problems in a single evaluation, we have proposed Userbility, a UX and usability inspection technique for evaluating these aspects in mobile applications. This paper presents an empirical study of the second version of Userbility to verify its feasibility. In this study, we compared Userbility with the UX and Usability Guidelines Approach (UUGA), which helps evaluate usability and UX separately in mobile applications. According to the quantitative results, considering efficiency, UUGA was better than the Userbility technique. However, the qualitative results suggest that Userbility yielded more improvement suggestions, which could be useful for redesigning the evaluated application. Together, usability and UX shape the emotions, perceptions and judgements users form of an application. Therefore, software development teams willing to increase the quality in use of the mobile applications they develop need to evaluate both. To evaluate usability and UX together, in our previous work (13) we developed Userbility to support inspectors in evaluating both UX and usability in mobile applications at the same time. To assess whether Userbility can support inspectors in detecting usability and UX problems, Nascimento et al. (13) conducted a study with five mobile applications.
The results showed that it is possible to identify improvements in applications, and allowed us to identify problems in the use of the technique itself. Based on this, in this paper we propose a new version of the technique and an empirical study to evaluate the feasibility of Userbility. We compared Userbility with an approach proposed by De Paula et al. (5), which evaluates UX and usability separately. The remainder of this paper is organized as follows. Section II presents a background on UX and usability evaluation techniques that can be applied to evaluate mobile applications. Then, Section III presents the second version of the Userbility technique. Section IV presents the empirical study in which we compared Userbility with another evaluation approach. In Section V, we present the results of the empirical study. Finally, Section VI presents our conclusions and future work.
- Research Article
- 10.3390/info10120366
- Nov 25, 2019
- Information
This paper presents UXmood, a tool that provides quantitative and qualitative information to assist researchers and practitioners in the evaluation of user experience and usability. The tool combines data from video, audio, interaction logs and eye trackers, presenting them in a configurable dashboard on the web. UXmood works analogously to a media player, in which evaluators can review the entire user interaction process, fast-forwarding irrelevant sections and rewinding specific interactions to replay them if necessary. In addition, sentiment analysis techniques are applied to video, audio and transcribed text content to obtain insights into the user experience of participants. The main motivations for developing UXmood are to support joint analysis of usability and user experience, to use sentiment analysis to support qualitative analysis, to synchronize different types of data in the same dashboard, and to allow the analysis of user interactions from any device with a web browser. We conducted a user study to assess the data communication efficiency of the visualizations, which provided insights on how to improve the dashboard.
- Conference Article
- 10.1109/icoco53166.2021.9673558
- Nov 17, 2021
Recently, many studies have concentrated on mobile learning for deaf children, but few have focused on its user experience (UX) evaluation. Besides, existing UX evaluation models with recommended metrics are hard to apply to mobile learning for the deaf, owing to their comprehensive measurements and their lack of description of how to conduct an evaluation for a more specific mobile learning process. Moreover, existing UX evaluation metrics are not tailored to evaluating deaf children's mobile learning. Therefore, this study proposed twenty-seven UX evaluation metrics for deaf children's mobile learning, derived from a literature review. This paper aims to verify these metrics through expert reviews among UX practitioners, application developers, and deaf-learning educators. A questionnaire containing the twenty-seven metrics was distributed to the selected experts, and the responses were analysed using the Fuzzy Delphi Method (FDM) to determine expert consensus. Based on the results, twenty-four UX evaluation metrics were accepted, while the experts rejected the remaining three. These verified UX evaluation metrics are expected to serve as guidance for application developers and UX practitioners in developing mobile learning with good UX that can improve the groundwork of learning for deaf children.
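The abstract does not spell out how FDM determines consensus, but a commonly reported formulation can be sketched as follows. The triangular fuzzy numbers assigned to each Likert level, the d ≤ 0.2 distance threshold, the 75% group-agreement rule and the 0.5 defuzzification cut-off below are conventions from the FDM literature, not values taken from this study.

```python
import math

# Example mapping of a 5-point Likert scale to triangular fuzzy numbers
# (a common convention; the study may have used a different mapping).
FUZZY = {1: (0.0, 0.0, 0.2), 2: (0.0, 0.2, 0.4), 3: (0.2, 0.4, 0.6),
         4: (0.4, 0.6, 0.8), 5: (0.6, 0.8, 1.0)}

def fdm_accept(ratings, d_threshold=0.2, agreement=0.75, cutoff=0.5):
    """Return True if the experts' ratings for one metric reach consensus."""
    tfns = [FUZZY[r] for r in ratings]
    n = len(tfns)
    # Component-wise average of the experts' triangular fuzzy numbers
    avg = tuple(sum(t[i] for t in tfns) / n for i in range(3))
    # Vertex distance between each expert's TFN and the group average
    dists = [math.sqrt(sum((t[i] - avg[i]) ** 2 for i in range(3)) / 3)
             for t in tfns]
    # Consensus: enough experts close to the average, and a high enough
    # defuzzified group score
    pct_within = sum(d <= d_threshold for d in dists) / n
    defuzzified = sum(avg) / 3
    return pct_within >= agreement and defuzzified >= cutoff
```

Under this sketch, a metric rated highly and consistently by the experts passes, while a polarised or uniformly low set of ratings fails either the distance threshold or the defuzzification cut-off.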
- Book Chapter
- 10.1007/978-3-642-23768-3_130
- Jan 1, 2011
In a nutshell: This tutorial comprehensively covers important user experience (UX) evaluation methods and the opportunities and challenges of UX evaluation in the area of entertainment and games. The course is an ideal forum for attendees to gain insight into state-of-the-art user experience evaluation methods, going well beyond standard usability and user experience evaluation approaches in the area of human-computer interaction. It surveys and assesses the user experience evaluation efforts of the gaming and human-computer interaction communities during the last 10 years. Keywords: entertainment, user experience, evaluation methods, beyond usability, games
- Conference Article
- 10.1145/2639189.2639214
- Oct 26, 2014
Many methods and tools have been proposed to assess the User Experience (UX) of interactive systems. However, while researchers have empirically studied the relevance and validity of several UX evaluation methods, only a few studies have explored expert-based evaluation methods for the assessment of UX. Whether experts are able to assess something as complex and inherently subjective as UX, how they conduct such an evaluation, and what criteria they rely on thus remain open questions. In the present paper we report on 33 UX experts performing a UX evaluation of 4 interactive systems. We provided the experts with UX Cards, a tool based on a psychological-needs-driven approach, developed to support UX design and evaluation. Results are encouraging and show that UX experts encountered no major issues in conducting a UX evaluation. However, significant differences exist between the individual elements that experts reported on and the overall assessment they made of the systems.
- Research Article
- 10.1080/10447318.2024.2394724
- Sep 11, 2024
- International Journal of Human–Computer Interaction
Intelligent environments are rapidly gaining ground, propelled by a rich sensor infrastructure, the Internet of Things, sophisticated reasoning capabilities, and Artificial Intelligence. In this complex technological landscape, crafting usable intelligent environments and assessing the user experience (UX) demands a thorough understanding of the concepts involved and the parameters that need to be studied. This paper carries out a review of usability and UX evaluation methods and frameworks, elaborating on fundamental concepts and presenting in detail approaches and methods reported in the literature. It additionally examines evaluation approaches in adaptive and ubiquitous computing systems, which are closely associated with intelligent environments, and presents UX challenges and evaluation frameworks in intelligent environments. Finally, the findings are synthesized and consolidated to produce a comprehensive overview of the field and the challenges that lie ahead.
- Research Article
- 10.17762/turcomat.v12i3.958
- Apr 10, 2021
- Turkish Journal of Computer and Mathematics Education (TURCOMAT)
The term user experience (UX) emerged in the early 1990s. Since then, UX has become a key term for researchers focusing on aspects that go beyond usability, particularly in the field of Human Computer Interaction (HCI). The aim of this study is to analyse the bibliometric aspect of the UX evaluation literature in the Scopus database, from which 644 papers were extracted. The study utilised the Publish or Perish software to collect the data, while VOSviewer was used to visualise it. Data analysis was also carried out using SPSS and Microsoft Excel. The publication of articles increased between 2018 and 2019, reaching 117 articles in 2019, the highest annual output to date. Most of the publications are from journals and conferences, mainly in English. Based on the analysis of the co-occurrence map of all keywords in the published articles, the keywords most frequently used by authors are user experience (416) and user experience evaluation (155). Most of the research related to UX evaluation was conducted in the United States, and the researchers prefer multi-authored publications. The co-authorship map of the journals' authors showed that V. Roto is one of the dominant co-authors. In addition, Arnold P. O. S. Vermeeren is the most cited author on UX evaluation in the Scopus database. This study presents the history of the scientific literature on user experience evaluation and will provide guidance for future research.
- Book Chapter
- 10.1007/978-3-642-03658-3_141
- Jan 1, 2009
High quality user experience (UX) has become a central competitive factor of product development in mature consumer markets [1]. Although the term UX originated from industry and is a widely used term also in academia, the tools for managing UX in product development are still inadequate. A prerequisite for designing delightful UX in an industrial setting is to understand both the requirements tied to the pragmatic level of functionality and interaction and the requirements pertaining to the hedonic level of personal human needs, which motivate product use [2]. Understanding these requirements helps managers set UX targets for product development. The next phase in a good user-centered design process is to iteratively design and evaluate prototypes [3]. Evaluation is critical for systematically improving UX. In many approaches to UX, evaluation basically needs to be postponed until the product is fully or at least almost fully functional. However, in an industrial setting, it is very expensive to find the UX failures only at this phase of product development. Thus, product development managers and developers have a strong need to conduct UX evaluation as early as possible, well before all the parts affecting the holistic experience are available. Different types of products require evaluation on different granularity and maturity levels of a prototype. For example, due to its multi-user characteristic, a community service or an enterprise resource planning system requires a broader scope of UX evaluation than a microwave oven or a word processor that is meant for a single user at a time. Before systematic UX evaluation can be taken into practice, practical, lightweight UX evaluation methods suitable for different types of products and different phases of product readiness are needed. A considerable amount of UX research is still about the conceptual frameworks and models for user experience [4]. 
Moreover, applying existing usability evaluation methods (UEMs) to UX evaluation without adaptation may lead to scoping issues. Consequently, there is a strong need to move UX evaluation from research into practice.
- Conference Article
- 10.1145/2851581.2856683
- May 7, 2016
In a nutshell: This course comprehensively covers important user experience (UX) evaluation methods as well as the opportunities and challenges of UX evaluation in the area of entertainment and games. The course is an ideal forum for attendees to gain insight into state-of-the-art user experience evaluation methods, going well beyond standard usability and user experience evaluation approaches in the area of human-computer interaction. It surveys and assesses the user experience evaluation efforts of the gaming and human-computer interaction communities during the last 15 years.
- Book Chapter
- 10.1007/978-3-642-02806-9_18
- Jan 1, 2009
The principal objective of this paper is to demonstrate the APRICOT methodology, which aims to streamline and increase the effectiveness of user experience initiatives within a development project and in the final solution. User Experience (UE) evaluations, both heuristic-based and usability-testing-based, are important skills and a crucial part of a practitioner’s toolkit. They showcase the inadequacies in an application or system. Close inspection of projects that have used User Experience evaluations reveals that only a small percentage of User Experience recommendations actually make it into the final product. This substantially reduces the ROI of the User Experience contribution. The APRICOT concept is a work in progress and aims to make User Experience evaluation more effective by better integrating UE practitioners and aligning their processes and methodology with those used by development teams.
- Research Article
- 10.17671/gazibtd.842888
- Jan 31, 2022
- Bilişim Teknolojileri Dergisi
Nowadays, there is a growing interest in User Experience (UX) evaluation in the Human-Computer Interaction (HCI) field for evaluating information systems. Meanwhile, the application of neuroscientific measurement tools in user experience studies is constantly increasing. Within the scope of this study, a systematic mapping is conducted on the use of electroencephalography (EEG), one of the neuroscientific measurement tools, in UX evaluation studies published between 2010 and 2020. In line with the scope of the study, 89 studies gathered from the Web of Science (WoS), Science Direct (Elsevier), IEEE Xplore, and ACM Digital Library databases are examined. The aim of this study is to reveal the trends in, and the use of, EEG alongside other UX evaluation methods in UX evaluation research. According to the results, the types of data collected by EEG for UX evaluation are emotion and attention, and data are generally collected as a single episodic experience. In addition, support vector machines are used for classification and event-related potentials are used for feature extraction of the EEG data.
- Research Article
- 10.1016/j.heliyon.2020.e03917
- May 1, 2020
- Heliyon
A User Interface (UI) and User eXperience (UX) evaluation framework for cyberlearning environments in computer science and software engineering education
- Conference Article
- 10.1109/icoict55009.2022.9914895
- Aug 2, 2022
79% of Micro, Small, and Medium Enterprises (MSMEs) in Indonesia have not used e-commerce. One of the reasons is the difficulty of using e-commerce. Therefore, an evaluation of usability and user experience was carried out on one of Indonesia's most popular e-commerce sites, namely Shopee. This study uses a mixed method with quantitative and qualitative data collection. Quantitative data were collected through the System Usability Scale (SUS) and User Experience Questionnaire (UEQ) instruments. Meanwhile, qualitative data were obtained to determine the perspective of MSMEs on the use of e-marketplaces, especially Shopee. This study involved 309 MSMEs in Indonesia, of which 7 had stopped using Shopee and 98 were active Shopee users at the time of the study. The study concluded that the usability of the Shopee e-marketplace was above average. Meanwhile, in terms of user experience, Shopee's hedonic quality is at a perfect level, whereas its pragmatic quality is below average.
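The SUS scores discussed in this entry (and the score of 68 reported in the AR study below) follow Brooke's standard 0-100 scoring scheme, which can be sketched as follows; the example responses are hypothetical, not data from either study.

```python
# Standard SUS scoring (Brooke, 1996): 10 items rated 1-5; odd-numbered
# (positively worded) items contribute (score - 1), even-numbered
# (negatively worded) items contribute (5 - score); the sum is scaled
# by 2.5 to yield a 0-100 score. 68 is the commonly cited average.

def sus_score(responses):
    """Compute the SUS score from a list of ten 1-5 ratings."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # 0-based even indices are the odd-numbered, positively worded items
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Hypothetical respondent, mildly positive on every item
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Note that a single SUS score above 68 is often read as "above average", which matches the phrasing used in this abstract.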
- Research Article
- 10.3389/frobt.2018.00106
- Sep 12, 2018
- Frontiers in Robotics and AI
This paper introduces an Augmented Reality (AR) system to support an astronaut's manual work, developed in two phases. The first phase was developed in the European Space Agency's (ESA) project “EdcAR—Augmented Reality for Assembly, Integration, Testing and Verification, and Operations” and the second phase was developed and evaluated within the Horizon 2020 project “WEKIT—Wearable Experience for Knowledge Intensive Training.” The main aim is to create an AR-based technological platform with reasonable user experience to support high-knowledge manual work in the aerospace industry. The AR system was designed for the Microsoft HoloLens mixed reality platform and implemented on a modular architecture. The purpose of evaluating the AR system is to show that an AR platform with reasonable user experience can reduce performance errors while executing a procedure, increase memorability, and improve the cost and time efficiency of training. The main purpose of the first-phase evaluation was to observe the AR system in use and gather feedback from a user experience point of view for future development. The use case was a filter change in an International Space Station (ISS) Columbus mock-up at ESA's European Astronaut Centre (EAC). The test group of 14 subjects included an experienced astronaut, EAC trainers, other EAC personnel, and a student group. The second phase consisted of an in-situ trial and evaluation process: the augmented reality system was tested at the ALTEC facilities in Turin, Italy, where 39 participants performed an actual astronaut procedure, the installation of a Temporary Stowage Rack (TSR) on a physical mock-up of an ISS module.
User experience was assessed using comprehensive questionnaires and interviews covering technology acceptance, system usability, smart-glasses user satisfaction, and user interaction satisfaction, gathering in-depth feedback on participants' experience with the platform. The analysis of the questionnaires and interviews showed that the scores obtained for user experience, usability, user satisfaction, and technology acceptance were near the desired average. Specifically, the System Usability Scale (SUS) score was 68, indicating that the usability of the augmented reality platform is already nearly acceptable.