Abstract

Depth, colour, and thermal images contain practical and actionable information for the blind. Conveying this information through alternative modalities such as audition creates new interaction possibilities for users as well as opportunities to study neuroplasticity. The ‘SoundSight’ App (www.SoundSight.co.uk) is a smartphone platform that allows 3D position, colour, and thermal information to directly control thousands of high-quality sounds in real time, creating unique and responsive soundscapes for the user. Users can select the specific sensor input and style of auditory output, which can be based on anything—tones, rainfall, speech, instruments, or even full musical tracks. Appropriate default settings for image sonification are provided by the designers, but users retain fine-grained control over the timing and selection of these sounds. By combining smartphone technology with a novel approach to sonification, the SoundSight App provides a cheap, widely accessible, scalable, and flexible sensory tool. In this paper we discuss the problems that commonly prevent assistive sensory tools from reaching long-term adoption, how our device seeks to address these problems, its theoretical background, and its technical implementation; finally, we showcase both initial user experiences and a range of use-case scenarios for scientists, artists, and the blind community.

Highlights

  • ‘Sensory substitution devices’ can continuously and systematically convert information normally associated with one sense into those of another [1]

  • The SoundSight takes a unique approach in sensory substitution because it delivers on the aspects requested by end users but leaves the exact style of sonification open-ended to the wider community

  • The complexity faced by the end user can be scaled, for instance by only sonifying certain colours, temperatures, distances, or even just one pixel at a time; this provides users with stepping stones from simple to advanced sensory substitution


Summary

Introduction and background

1.1 Basic overview of SSDs and issues

‘Sensory substitution devices’ (or SSDs) can continuously and systematically convert information normally associated with one sense (e.g. vision) into those of another (e.g. hearing, or touch) [1]. One influential example, the vOICe, ‘snapshots’ a single image and sonifies it over one second, scanning left-to-right through the image: each column is sonified in turn, with pixel information converted into a mix of pitch (denoting verticality), loudness (denoting brightness), and panning (denoting laterality) over time, and the process repeats for each new snapshotted image. Beyond the vOICe, this spectrogram-like approach of mapping pitch to height, panning to laterality, and loudness to brightness has underpinned many other SSD designs with minor variations, such as presenting the information all at once (PSVA [19]; The Vibe [20]), musically (SmartSight [21]), or adding colour information through timbre (EyeMusic [22]). These design choices have been made upfront by their creators, with little opportunity for users to modify the principles of sonification in light of their own experience or interests.
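The column-scan mapping described above can be sketched in a few lines of NumPy. This is a minimal illustration of the pitch/loudness/panning principle, not the vOICe's actual implementation; the function name, frequency range, and parameters are our own assumptions:

```python
import numpy as np

def sonify_image(image, duration=1.0, sample_rate=22050,
                 f_min=200.0, f_max=5000.0):
    """Sketch of a vOICe-style scan: sonify a greyscale image
    (2D array, values in [0, 1]) left-to-right over `duration` seconds.
    Row index -> sine pitch (top row = highest), pixel brightness ->
    loudness, column position -> stereo pan. Returns an (N, 2) array."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    # One fixed frequency per row, spaced logarithmically, top row highest.
    freqs = f_max * (f_min / f_max) ** (np.arange(n_rows) / (n_rows - 1))
    t = np.arange(samples_per_col) / sample_rate
    out = np.zeros((samples_per_col * n_cols, 2))
    for c in range(n_cols):
        # Sum one sinusoid per row, weighted by that pixel's brightness.
        col = image[:, c]                                   # (n_rows,)
        tones = np.sin(2 * np.pi * freqs[:, None] * t)      # (n_rows, samples)
        mono = (col[:, None] * tones).sum(axis=0)
        pan = c / (n_cols - 1)                              # 0 = left, 1 = right
        seg = slice(c * samples_per_col, (c + 1) * samples_per_col)
        out[seg, 0] = mono * (1 - pan)                      # left channel
        out[seg, 1] = mono * pan                            # right channel
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out                  # avoid clipping
```

A bright pixel near the top-left of the image thus produces a loud, high-pitched tone at the start of the scan in the left ear, which is the core intuition behind the spectrogram-like family of SSDs the paper describes.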

Lack of adoption
Hardware design
Utility
Situational factors and wider context
SoundSight: a mobile SSD
SoundSight architecture
Creating the audio array
Auditory properties
Auditory perception
Communicating distance
Naturalistic approaches: inspiration from blind individuals
The sonification controller
Heartbeat and sound onset
Complex colours and sound selection
Sensors
Structure sensor
FLIR one sensor
Settings and user control
Gestural controls
Preliminary end user testing
Sensory substitution and augmentation
Research on human perception
Findings
Conclusions
