Beadwork Bridge: Understanding and Exploring the Opportunities of Beadwork in Enriching School Education for Blind and Low Vision (BLV) People
Tactile perception is a crucial channel for education in individuals with blindness and low vision (BLV), and beadwork is a low-cost and widely adopted tool in their educational practices. In this paper, we aim to explore what the field of Human-Computer Interaction (HCI) can learn from beadwork practices in relation to educational somatic experiences and tangible interaction. To understand how beadwork practices are enacted, we conducted in-class observations, semi-structured interviews, and focus groups with BLV students and teachers. Our results suggest that beadwork is an effective tool for fostering personal development (e.g., mathematical and creative skills) and social engagement (e.g., career development). Based on our findings, we offer insights into how beadwork can serve as a cost-effective material for HCI, particularly in the context of embodied cognition and soma design. Finally, we propose how state-of-the-art technology could be integrated to optimize the overall process.
- Research Article
- 19
- 10.1145/3555570
- Nov 7, 2022
- Proceedings of the ACM on Human-Computer Interaction
Blind and low vision people use visual description services (VDS) to gain visual interpretation and build access in a world that privileges sight. Despite their many benefits, VDS have many harmful privacy and security implications. As a result, researchers are suggesting, exploring, and building obfuscation systems that detect and obscure private or sensitive materials. However, as obfuscation depends largely on sight to interpret outcomes, it is unknown whether Blind and low vision people would find such approaches useful. Our work aims to center the perspectives and opinions of Blind and low vision people on the potential of obfuscation to address privacy concerns in VDS. By reporting on interviews with 20 Blind and low vision people who use VDS, our findings reveal that popular research trends in obfuscation fail to capture the needs of Blind and low vision people. While obfuscation might be helpful in gaining more control, tensions around obfuscation misrecognition and confirmation are prominent. We turn to the framework of interdependence to unpack and understand obfuscation in VDS, enabling us to complicate privacy concerns, uncover the labor of Blind and low vision people, and emphasize the importance of safeguards. We provide design directions to move the trajectory of obfuscation research forward.
- Conference Article
- 2
- 10.1145/3544549.3585819
- Apr 19, 2023
Self-service terminals (SSTs) are almost everywhere in our daily life and increasingly use capacitive and infrared touchscreens as the interface. Most current solutions that help blind and low vision (BLV) people access existing touchscreens are suitable only for capacitive touchscreens, not for infrared ones. In this paper, we propose a voice-based interactive method that uses a conductive folding stand together with the phone camera to allow BLV people to access both types of SST touchscreens. Voice feedback guides users to move the phone close to the target button and touch it with the end of the unfolded stand. Using only a portable accessory, this method directly guides users to touch the target and effectively avoids false triggering. A preliminary evaluation indicated that our approach enabled users to access the target buttons on the touchscreen with high accuracy and a short completion time.
- Research Article
- 10.1145/3770654
- Dec 2, 2025
- Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Screen readers are audio-based software that Blind and Low Vision (BLV) people use to interact with computing devices, such as tablets and smartphones. Although this technology has significantly improved the accessibility of touchscreen devices, the sequential nature of audio limits the bandwidth of information users can receive and process. We introduce TapNav, an adaptive spatiotactile screen reader prototype developed to interact with touchscreen interfaces spatially. TapNav's screen reader provides adaptive auditory feedback that, in combination with a tactile overlay, conveys spatial information and location of interface elements on-screen. We evaluated TapNav with 12 BLV users who interacted with TapNav to explore a data visualization and interact with a bank transactions application. Our qualitative findings show that touch points and spatially constrained navigation helped users anticipate outcomes for faster exploration, and offload cognitive load to touch. We provide design guidelines for creating tactile overlays for adaptive spatiotactile screen readers and discuss their generalizability beyond our exploratory data analysis and everyday application navigation scenarios.
- Research Article
- 13
- 10.1145/3167902.3167905
- Nov 27, 2017
- ACM SIGACCESS Accessibility and Computing
While our community has many active projects involving blind people, low vision is rarely addressed. People with low vision have functional vision, but their visual impairment adversely affects their daily life and it cannot be corrected with glasses or contact lenses. Over the last few years, we have been conducting research with this understudied demographic: understanding low vision people's needs and designing applications to address the challenges they face. In this article, we discuss our ongoing research in this area, focusing on designing augmented reality applications for low vision users. We begin this article by describing low vision and motivating our focus on augmented reality applications on smartglasses for low vision people. We then provide overviews of three research projects that exemplify our research agenda: a study where we observed low vision people conducting a navigation and shopping task, a study where we examined low vision people's perception of virtual text and shapes on smartglasses, and the design of a smartglasses application that facilitates a visual search task.
- Conference Article
- 86
- 10.1145/3025453.3025949
- May 2, 2017
People with low vision have a visual impairment that affects their ability to perform daily activities. Unlike blind people, low vision people have functional vision and can potentially benefit from smart glasses that provide dynamic, always-available visual information. We sought to determine what low vision people could see on mainstream commercial augmented reality (AR) glasses, despite their visual limitations and the device's constraints. We conducted a study with 20 low vision participants and 18 sighted controls, asking them to identify virtual shapes and text in different sizes, colors, and thicknesses. We also evaluated their ability to see the virtual elements while walking. We found that low vision participants were able to identify basic shapes and read short phrases on the glasses while sitting and walking. Identifying virtual elements had a similar effect on low vision and sighted people's walking speed, slowing it down slightly. Our study yielded preliminary evidence that mainstream AR glasses can be powerful accessibility tools. We derive guidelines for presenting visual output for low vision people and discuss opportunities for accessibility applications on this platform.
- Conference Article
- 16
- 10.1145/3544548.3581213
- Apr 19, 2023
While being able to read with screen magnifiers, low vision people have slow and unpleasant reading experiences. Eye tracking has the potential to improve their experience by recognizing fine-grained gaze behaviors and providing more targeted enhancements. To inspire gaze-based low vision technology, we investigate the suitable method to collect low vision users’ gaze data via commercial eye trackers and thoroughly explore their challenges in reading based on their gaze behaviors. With an improved calibration interface, we collected the gaze data of 20 low vision participants and 20 sighted controls who performed reading tasks on a computer screen; low vision participants were also asked to read with different screen magnifiers. We found that, with an accessible calibration interface and data collection method, commercial eye trackers can collect gaze data of comparable quality from low vision and sighted people. Our study identified low vision people’s unique gaze patterns during reading, building upon which, we propose design implications for gaze-based low vision technology.
- Research Article
- 1
- 10.70222/hres23
- Jul 25, 2024
- HerculeanResearch
This article introduces the “Mobile Bio-Eye-Tronic System,” an artificial vision system for blind people and for people with low vision (sight loss). The “Mobile Bio-Eye-Tronic System” is a completely original project, unique to the author of this article. There are 45 million visually impaired (blind) people in the world and 135 million people with low vision (sight loss). 60% of blindness in the world is treatable and 20% is preventable. 25 million people are blind in Europe, 12 million in America, 9 million in India, 6 million in China, and 7 million in Africa. In Turkey, this number is approximately 300 thousand. Based on these statistics, the main aim of this article is to reach hundreds of thousands of people, to help them carry out their daily activities at least partially, to improve the quality of life of these visually impaired people, and to restore their health. In addition, the scientific and technical studies carried out on this subject will enrich the literature and benefit scientific and technical progress. An examination of current studies on bionic eyes shows no other system in the literature that obtains results using a mobile phone camera and software. The bionic eye described in this article will be a first in this respect.
- Research Article
- 10.70107/collectjroboticsandai-art0038
- Jul 25, 2024
- Collective Journal of Robotics and AI
This article introduces the “Mobile Bio-Eye-Tronic System,” an artificial vision system for blind people and for people with low vision (sight loss). The “Mobile Bio-Eye-Tronic System” is a completely original project, unique to the author of this article. There are 45 million visually impaired (blind) people in the world and 135 million people with low vision (sight loss). 60% of blindness in the world is treatable and 20% is preventable. 25 million people are blind in Europe, 12 million in America, 9 million in India, 6 million in China, and 7 million in Africa. In Turkey, this number is approximately 300 thousand. Based on these statistics, the main aim of this article is to reach hundreds of thousands of people, to help them carry out their daily activities at least partially, to improve the quality of life of these visually impaired people, and to restore their health. In addition, the scientific and technical studies carried out on this subject will enrich the literature and benefit scientific and technical progress. An examination of current studies on bionic eyes shows no other system in the literature that obtains results using a mobile phone camera and software. The bionic eye described in this article will be a first in this respect.
- Research Article
- 2
- 10.1145/3178412.3178421
- Jan 9, 2018
- ACM SIGACCESS Accessibility and Computing
Low vision is a visual impairment that cannot be corrected with eyeglasses or contact lenses. Low vision people have functional vision and prefer using that vision instead of relying on audition and touch. Existing approaches to low vision accessibility enhance people's vision using simple "signal-to-signal" techniques that do not take into account the user's context. There is thus a major gap between low vision people's needs and existing low vision technologies. My doctoral research aims to address this gap by augmenting low vision people's visual experience with direct, optimal visual feedback based on the user's context. I will design and study novel methods for visual augmentation, which involves visual feedback beyond simple enhancements. My research considers two dimensions: visual condition and task. By understanding the visual perception of people with different visual abilities and exploring their needs in different visual tasks, I will design applications with visual feedback that is optimal for a specific context, maximizing people's access to information. My research will yield design insights and novel applications for people with all visual abilities.
- Conference Article
- 3
- 10.17210/hcik.2016.01.198
- Jan 27, 2016
Low vision people, who make up more than 88% of visually impaired people, want to use their residual vision and do not want to appear disabled. However, many assistive devices for low vision are suited only to indoor use, and using them visibly marks people as disabled, so many are reluctant to use such devices. Many low vision people therefore want to solve the problem with a smartphone, but current smartphone functions are not sufficient. In this study, we propose smart assistive software and a device that let low vision people use their residual vision as much as possible without feeling self-conscious. To that end, we interviewed a low vision expert and low vision people using qualitative research methods. Based on the results, we present a solution and propose EYESEE, an assistive device and application for low vision people.
- Research Article
- 10.14571/brajets.v10.n4.275-287
- Dec 29, 2017
Blind or low vision people should be able to exercise, under equal conditions, the rights and duties that assure them citizenship. To do this, they can use differentiated resources that facilitate or promote the development of functional skills and, consequently, social inclusion; among these, Assistive Technologies (AT) stand out. This article presents the results of a literature review covering the years 2007 to 2015, which aimed to analyze which AT can be used by blind or low vision people in social contexts, and how they influence inclusion. The methodology was a bibliographical and documentary survey on AT, blindness, and low vision, conducted in the SciELO virtual library and in the Portal de Periódicos CAPES. Brazilian articles mentioning the theme were selected, then analyzed and categorized according to common elements, and served as the basis for the considerations and discussions presented here. We found, above all, that these resources are used in different social contexts and for varied purposes, and that they allow greater independence for their users. The final considerations point to the importance of studying AT in relation to blind and low vision people, of raising society's awareness against prejudice, and of implementing public policies to support the use of these resources with this public.
- Research Article
- 1
- 10.1167/jov.23.15.18
- Dec 1, 2023
- Journal of vision
Low vision is a visual impairment that falls short of blindness but cannot be corrected by eyeglasses or contact lenses. While current low vision aids (e.g., magnifier, CCTV) support basic vision enhancements, such as magnification and contrast enhancement, these enhancements often arbitrarily alter a user's full field of view without considering the user's context, such as their visual abilities, tasks, and environmental factors. As a result, these low vision aids are not sufficient or preferred by low vision users in many important tasks. Augmented reality (AR) technology presents a unique opportunity to enhance low vision people's visual experience by automatically recognizing the surrounding environment and presenting tailored visual augmentations. In this talk, I will discuss how we design and build intelligent AR systems to support low vision people in visual tasks, such as a head-mounted AR system that presents visual cues to orient users' attention in a visual search task, and a projection-based AR system that projects visual highlights on stair edges to support safe stair navigation. I will conclude by discussing our future research direction on AR for low vision accessibility.
- Conference Article
- 82
- 10.1145/2971648.2971723
- Sep 12, 2016
Visual impairments encompass a range of visual abilities. People with low vision have functional vision, and thus their experiences are likely to differ from those of people with no vision. We sought to answer two research questions: (1) what challenges do low vision people face when performing daily activities, and (2) what aids (high- and low-tech) do low vision people use to alleviate these challenges? Our goal was to reveal gaps in current technologies that can be addressed by the UbiComp community. Using contextual inquiry, we observed 11 low vision people perform a wayfinding and shopping task in an unfamiliar environment; the task involved wayfinding as well as searching for and purchasing a product. We found that, although there are low vision aids on the market, participants mostly used their smartphones, despite interface accessibility challenges. While smartphones helped them outdoors, participants were overwhelmed and frustrated when shopping in a store. We discuss the inadequacies of existing aids and highlight the need for systems that enhance visual information, rather than convert it to audio or tactile form.
- Conference Article
- 103
- 10.1145/3173574.3174203
- Apr 21, 2018
Current low-tech Orientation & Mobility (O&M) tools for visually impaired people, e.g., tactile maps, have limitations. Interactive accessible maps have been developed to overcome these; however, most are limited to the exploration of existing maps and have remained in laboratories. Using a participatory design approach, we worked closely with 15 visually impaired students and 3 O&M instructors over 6 months. We iteratively designed and developed an augmented reality map intended for use in O&M classes in special education centers. This prototype combines projection, audio output, and tactile tokens, and thus allows both map exploration and map construction by low vision and blind people. Our user study demonstrated that all students were able to use the prototype successfully, and showed high user satisfaction. A second phase with 22 international special education teachers allowed us to gain more qualitative insights. This work shows that augmented reality has potential for improving access to education for visually impaired people.
- Conference Article
- 5
- 10.1109/icsmc.2010.5642352
- Oct 1, 2010
Motivated by our field work finding that low vision people hardly notice public signs on streets and inside buildings, even in clear weather, in this report we continue our research on applying eye tracking technology to low vision aids. We start with a short characterization of low vision from a viewing standpoint and show that a low vision person can, in principle, recognize a target object with their residual sight on a mobile display if we send an enlarged, clear view of the target. Taking advantage of this possible enhancement of low vision, we explore eye tracking technology to assist navigation while walking. We show that the classical scanpath technique for localizing regions of interest (ROIs) is applicable to low vision as well. We then examine possible enhancement of low vision by (1) segmenting public signs out of the ROI, and (2) sending an enhanced view back to the user's mobile monitor. We also show a preliminary result on recognizing public signs in the view using a fast pattern matching technique called “boosting,” linking to a future vision-navigator system that guides the gaze of a low vision user to a missed public sign and zooms into it. Optimization of the classifier programs is discussed separately from a decision tree standpoint.