The Promise and Pitfalls of Web Accessibility Overlays for Blind and Low Vision Users

Abstract

Web accessibility is essential for ensuring that all individuals, regardless of their physical or cognitive abilities, can access and effectively use the internet. This principle is fundamental as digital platforms increasingly become primary channels for education, communication, commerce, and entertainment. Our study critically evaluates the effectiveness of accessibility overlays, which are third-party tools that claim to enhance website usability for people with disabilities. Specifically, we focused on the experiences of blind and low-vision users, who are disproportionately impacted by poor web accessibility. Through a combination of online surveys and interviews, we engaged with participants who employ a variety of assistive technologies to navigate the web. The empirical evidence gathered paints a troubling picture: despite their intended purpose, accessibility overlays often fail to deliver on their promises and, in many cases, exacerbate existing challenges. Participants frequently reported that these overlays conflicted with their assistive technologies, leading to reduced functionality and increased frustration. This points to a significant misalignment between the design of these tools and the real-world needs of users. The study highlights the pressing need to move away from superficial technological fixes and towards deeper, more meaningful engagement with the needs of disabled users. This involves embracing user-centered design practices that integrate accessibility considerations from the ground up, ensuring that digital environments are truly inclusive. By prioritizing comprehensive, well-integrated solutions over patches like overlays, we can foster a more accessible and equitable digital landscape.

Similar Papers
  • Conference Article
  • Cited by 24
  • 10.1145/3544548.3581302
ImageAssist: Tools for Enhancing Touchscreen-Based Image Exploration Systems for Blind and Low Vision Users
  • Apr 19, 2023
  • Vishnu Nair + 2 more

Blind and low vision (BLV) users often rely on alt text to understand what a digital image is showing. However, recent research has investigated how touch-based image exploration on touchscreens can supplement alt text. Touchscreen-based image exploration systems allow BLV users to deeply understand images while granting a strong sense of agency. Yet, prior work has found that these systems require a lot of effort to use, and little work has been done to explore these systems' bottlenecks on a deeper level and propose solutions to these issues. To address this, we present ImageAssist, a set of three tools that assist BLV users through the process of exploring images by touch -- scaffolding the exploration process. We perform a series of studies with BLV users to design and evaluate ImageAssist, and our findings reveal several implications for image exploration tools for BLV users.

  • Conference Article
  • Cited by 29
  • 10.1145/3544548.3581532
Exploring Chart Question Answering for Blind and Low Vision Users
  • Apr 19, 2023
  • Jiho Kim + 3 more

Data visualizations can be complex or involve numerous data points, making them impractical to navigate using screen readers alone. Question answering (QA) systems have the potential to support visualization interpretation and exploration without overwhelming blind and low vision (BLV) users. To investigate if and how QA systems can help BLV users in working with visualizations, we conducted a Wizard of Oz study with 24 BLV people where participants freely posed queries about four visualizations. We collected 979 queries and mapped them to popular analytic task taxonomies. We found that retrieving value and finding extremum were the most common tasks, participants often made complex queries and used visual references, and the data topic notably influenced the queries. We compile a list of design considerations for accessible chart QA systems and make our question corpus publicly available to guide future research and development.

  • Conference Article
  • Cited by 23
  • 10.1145/3411763.3451810
Automated Video Description for Blind and Low Vision Users
  • May 8, 2021
  • Aditya Bodi + 8 more

Video accessibility is crucial for blind and low vision users for equitable engagements in education, employment, and entertainment. Despite the availability of professional description services and tools for amateur description, most human-generated descriptions are expensive and time consuming, and the rate of human-generated descriptions simply cannot match the speed of video production. To overcome the increasing gaps in video accessibility, we developed a system to automatically generate descriptions for videos and answer blind and low vision users’ queries on the videos. Results from a pilot study with eight blind video aficionados indicate the promise of this system for meeting needs for immediate access to videos and validate our efforts in developing tools in partnership with the individuals we aim to benefit. Though the results must be interpreted with caution due to the small sample size, participants overall reported high levels of satisfaction with the system, and all preferred use of the system over no support at all.

  • Research Article
  • Cited by 5
  • 10.1016/j.infsof.2024.107518
Are your apps accessible? A GCN-based accessibility checker for low vision users
  • Jun 24, 2024
  • Information and Software Technology
  • Mengxi Zhang + 5 more

  • Conference Article
  • Cited by 4
  • 10.1145/3597638.3614490
Understanding Blind and Low Vision Users' Attitudes Towards Spatial Interactions in Desktop Screen Readers
  • Oct 22, 2023
  • Arnavi Chheda-Kothary + 5 more

Desktop screen readers as a web navigation mechanism for BLV users are tedious and frozen in time, especially in the face of richer ways of presenting spatial information such as tactile and touchscreen devices. In our work, we consider what it means to create and evaluate systems that can present a similarly rich, spatial interaction mechanism plugged into existing screen reader paradigms. We present a formative study conducted with SpaceNav, a custom screen reader that utilizes spatial input and output to navigate two different web applications. We present results from this study, and discuss a new browser extension we are implementing based on our formative study feedback to more robustly test spatial interactions in the context of real world websites. To close, we describe our goals for evaluating the new web extension in a future study.

  • Book Chapter
  • 10.1007/978-3-319-29498-8_27
Preliminary Findings from an Information Foraging Behavioural Study Using Eye Tracking
  • Jan 1, 2016
  • J Chakraborty + 2 more

Cognitive overload can be a serious impediment in the assimilation of information for all types of users. Research has demonstrated the usefulness of adaptive interfaces in reducing cognitive overload by providing an interface that automatically reacts to the end users’ information foraging behavior. In order to understand and compare behaviors and patterns (between sighted and low vision users), it is necessary to understand the information seeking behavior of sighted users for any patterns that may exist as a baseline. These findings can then be compared to data on low vision users in a future study. In this study, eye tracking is used to explore information seeking behavior of visual users. In particular, we compare the gaze patterns of users when using both a traditional interface and complex interface to identify current events of interest. The eye tracking data was analyzed using kernel density statistics and correlation analysis to determine if relationships exist between information seeking behavior, task completion and accuracy. Results show that information seeking behavior tends to be more efficient and accurate when using the traditional interface and that a more complex interface introduces additional cognitive overload.

  • Research Article
  • Cited by 49
  • 10.1016/j.intcom.2011.05.005
Barriers common to mobile and disabled web users
  • May 23, 2011
  • Interacting with Computers
  • Yeliz Yesilada + 2 more

  • Research Article
  • Cited by 9
  • 10.1145/3546747
OneButtonPIN: A Single Button Authentication Method for Blind or Low Vision Users to Improve Accessibility and Prevent Eavesdropping
  • Sep 19, 2022
  • Proceedings of the ACM on Human-Computer Interaction
  • Manisha Varma Kamarushi + 3 more

A Personal Identification Number (PIN) is a widely adopted authentication method used by smartphones, ATMs, etc. PINs offer strong security and can be reset when compromised (unlike biometric authentication). However, PINs can be inaccessible for blind or low vision (BLV) users due to screen readers voicing PINs to bystanders or potential shoulder surfing attack risks---bystanders could watch the PIN being entered without the user noticing. To address this, we present OneButtonPIN, an interface to improve PIN entry accessibility and security for BLV users. Here, a single on-screen button, when pressed and held, triggers a haptic vibration sequence. A digit is entered by counting the vibrations and releasing the button. We explored introducing random timings to the vibration sequence to increase security. A week-long evaluation with 9 BLV participants and a security study with 10 sighted participants acting as shoulder surfers demonstrated OneButtonPIN's usability and resilience against eavesdropping.

  • Research Article
  • Cited by 1
  • 10.1167/jov.23.15.18
Invited Session IV: Extended reality--applications in vision science and beyond: Augmented reality systems for people with low vision.
  • Dec 1, 2023
  • Journal of vision
  • Yuhang Zhao

Low vision is a visual impairment that falls short of blindness but cannot be corrected by eyeglasses or contact lenses. While current low vision aids (e.g., magnifier, CCTV) support basic vision enhancements, such as magnification and contrast enhancement, these enhancements often arbitrarily alter a user's full field of view without considering the user's context, such as their visual abilities, tasks, and environmental factors. As a result, these low vision aids are not sufficient or preferred by low vision users in many important tasks. Augmented reality (AR) technology presents a unique opportunity to enhance low vision people's visual experience by automatically recognizing the surrounding environment and presenting tailored visual augmentations. In this talk, I will talk about how we design and build intelligent AR systems to support low vision people in visual tasks, such as a head-mounted AR system that presents visual cues to orient users' attention in a visual search task, as well as a projection-based AR system that projects visual highlights on the stair edges to support safe stair navigation. I will conclude my talk by discussing our future research direction on AR for low vision accessibility.

  • Research Article
  • Cited by 3
  • 10.1080/10447318.2021.1952802
An Empirical Comparison between the Effects of Normal and Low Vision on Kinematics of a Mouse-Mediated Pointing Movement
  • Jul 31, 2021
  • International Journal of Human–Computer Interaction
  • Yuenkeen Cheong + 2 more

Vision problems affect many Americans today. While there are several pioneering studies that examine computer input tasks performed by people with low vision, most focus on aggregate measures of performance, such as total task time. To provide a more detailed analysis of low vision user performance, we captured kinematics of pointing movements with the goal of determining the effect of low vision on the process of the movement. Ten participants were recruited to form a sighted and a low vision group. After controlling for the effects of age and psychomotor ability, differences in movement performance and kinematics between the two groups were compared. As expected, longer movement times were observed among low vision participants. When the movement was parsed into primary (i.e., initial phase) and secondary (i.e., homing phase) submovements, the kinematics of the primary submovement were similar for the two groups. However, low vision participants were found to spend more time in the secondary submovement. The effect of visual condition was amplified when a low vision participant had to move the cursor over longer distances. These findings suggest that for computing tasks requiring mouse-mediated pointing, task improvements focused on the secondary movement (i.e., homing phase) would benefit low vision users; improving performance during the homing phase could improve overall performance. These results could also be useful to guide the development of adaptive and individualized assistive technology that helps users acquire intended targets.

  • Research Article
  • 10.1080/17483107.2025.2544942
Exploring the use of smartphone applications during navigation-based tasks for individuals who are blind or who have low vision: future directions and priorities
  • Aug 25, 2025
  • Disability and Rehabilitation: Assistive Technology
  • Maxime Bleau + 4 more

Purpose: Mainstream smartphone applications are increasingly replacing traditional visual aids to facilitate independent travel for people with blindness or low vision. However, little is known about which navigation apps are being used, the factors underpinning these decisions, and why apps are not used in certain contexts. The goal of this study was to explore the navigation-based apps used by individuals who are blind or have low vision, the factors influencing these decisions, and perceptions about gaps to address future needs in navigation. Materials and Methods: An international online survey was conducted with 139 participants who self-identified as blind or low vision. Results: Findings indicate that the decision to use an app based on artificial intelligence (AI) versus live video assistance is related to whether the task is dynamic or static in nature. Although most participants rely on apps only during unfamiliar routes (60.9%), apps are shown to supplement rather than replace traditional tools such as the white cane and dog guide. Participants underscore the need for future apps to better assist with indoor navigation and to provide more precise information about points of interest (POI). Conclusion: These results provide vital insights for technology developers about the perceived utility of smartphone apps for people with low vision or blindness during navigation. Our results highlight the importance of built-in accessibility features for users with visual impairments. As additional technology-based solutions are developed, it is essential that blind and low vision users, as well as rehabilitation professionals, are meaningfully included in the design process.

  • Conference Article
  • Cited by 7
  • 10.1109/icimu49871.2020.9243565
An Empirical Study to Evaluate the Accessibility of Arabic Websites by Low Vision Users
  • Aug 24, 2020
  • Muhammad Akram + 1 more

Empirical studies that identify web accessibility issues in Arabic-language websites can play a vital role in improving website quality. The World Health Organization (WHO) reports that more than one billion people live with some form of disability. The United Nations (UN) General Assembly passed a treaty in 2006 to protect the rights of people with disabilities; Article 9 of the treaty obliges all signatory countries to identify and remove the barriers that prevent disabled people from accessing their environment, transportation, public facilities, services, and information and communication technologies (ICT). Although the Web Content Accessibility Guidelines (WCAG) have existed for two decades, disabled users are still unable to benefit adequately from the services that websites provide. The research team found several web accessibility evaluation studies, conducted mostly in Western countries, that involved disabled users in task-based evaluation of English-language websites. However, little is known about the problems faced by disabled users of Arabic websites. To the best of our knowledge, this is the first study to gather empirical evidence on Arabic-language websites by involving disabled users in the accessibility audit process. In this study, five Saudi ministry websites were selected for accessibility evaluation based on their frequency of use. Twenty-five low-vision participants took part; each was given a set of tasks to perform on each chosen website and asked to rate the overall level of difficulty of completing the tasks on a five-point scale. Problems faced by participants were recorded, and the difficulty of overcoming each problem was rated on a five-point scale.
After completing the tasks, each participant rated the website's level of compliance with the Web Content Accessibility Guidelines 2.0 on a five-point scale. The study concluded that the selected Arabic websites were not designed in full accordance with WCAG 2.0. Moreover, many of the accessibility problems faced by disabled users are complex and cannot be addressed by the current checklist alone. These conclusions underscore that involving disabled users in design and evaluation is essential to improving accessibility. The research team believes that the empirical evidence generated by this study adds to the current body of accessibility evidence.

  • Conference Article
  • Cited by 9
  • 10.1145/3373625.3417997
Ensuring Accessibility: Individual Video Playback Enhancements for Low Vision Users
  • Oct 26, 2020
  • Andreas Sackl + 3 more

Although software products are becoming increasingly accessible and assistive tools like screen readers becoming widely available, people with low vision still face insufficient support when it comes to consumption of digital video content. In this paper, we present an accessible desktop video player software, which allows people with low vision to adapt the presentation of digital videos according to their specific needs. For visual enhancement, we implemented a broad range of image manipulation techniques, like adaptation of contrast, color manipulation (e.g. inverting, grey scale transformations) and edge detection algorithms for sharpness optimization. Based on the feedback from low vision users, we discuss how to implement enhancement filter configuration and how to consider several input modalities.

  • Research Article
  • Cited by 18
  • 10.1089/cyber.2019.0409
The Accessibility of Commercial Off-The-Shelf Virtual Reality for Low Vision Users: A Macular Degeneration Case Study.
  • Feb 24, 2020
  • Cyberpsychology, Behavior, and Social Networking
  • Wendy Powell + 2 more

Virtual reality (VR) is demonstrating increasing potential for therapeutic benefit in elderly care, but it is still generally considered to be the domain of the visually unimpaired. Even where VR and augmented reality (AR) are being explored for use with low vision, it is generally with a focus on creating bespoke software and hardware. However, the properties of commercial off-the-shelf (COTS) headsets, such as high luminance, may render them accessible even to very low vision users. Using a case-study approach, we explored the differences in visual perception from baseline to pass-through AR and commercial VR applications for an elderly female (Mrs. M) with advanced age-related macular degeneration. We found notable improvements in object, face, and color recognition, particularly with higher display brightness. Furthermore, Mrs. M was able to engage fully and enthusiastically with a number of (unmodified) VR applications, providing detailed descriptions of both static and moving elements. We suggest that the high luminance available in COTS VR may support more stable fixation closer to the fovea, improving visual resolution. Furthermore, the improvements we noted in color perception support previous suggestions that increasing luminance may improve photosensitivity by reducing the uptake of limited oxygen by the rod cells. We conclude that low vision should not automatically preclude users from engaging in VR research or entertainment, and that they may be able to use well-illuminated VR applications without any special modifications.

  • Book Chapter
  • Cited by 29
  • 10.1007/3-540-45491-8_99
Evaluation of Long Descriptions of Statistical Graphics for Blind and Low Vision Web Users
  • Jan 1, 2002
  • H K Ault + 4 more

The objective of this research was to maximize not only accessibility but also user comprehension of web pages, particularly those containing tabular and graphical information. Based on literature and interviews with blind and low vision students and their teachers, the research team developed guidelines for web developers to describe charts and graphs commonly used in statistical applications. A usability study was then performed to evaluate the effectiveness of these new guidelines. Accessibility and comprehension for both blind and low vision users were increased when web pages were developed following the new guidelines.
