Design considerations for photosensitivity warnings in visual media
When digital content is tested for photosensitive safety and is found to contain seizure-inducing strobes or flashing lights, warnings about photosensitive risk are usually shown to the user prior to viewing the content. These photosensitivity warnings are an important accessibility feature for people with photosensitive epilepsy (PSE), allowing them to avoid interacting with content that may trigger seizures. However, little is known about how these warnings should be structured to maximize their effectiveness in helping people with PSE navigate visual media safely. The design space for photosensitivity warnings is vast and includes questions such as what details to include about strobing light sequences or the content itself, where to place warnings within an interface, and what methods to use to extract information about the strobing light sequences (e.g., crowdsourced or automated methods). In this work, we contribute a thematic analysis of crowdsourced warnings drawn from the DoesTheDogDie online forum and an interview study with five people who have been diagnosed with photosensitive epilepsy about design considerations for photosensitivity warnings on digital platforms. To guide our interviews, we assembled examples of both crowdsourced and automated warnings about seizure-inducing content in films. Automated warnings were presented in the form of a high-fidelity sketch demonstrating what an automated system for photosensitivity warnings might look like when deployed by a film streaming platform. We contribute design suggestions for the structure, content, and data sourcing of photosensitivity warnings for visual media based on the findings of our interviews. The results of this work will enable more effective and informative photosensitivity warnings across all forms of digital visual media.
- Book Chapter
1
- 10.1007/978-3-030-34058-2_18
- Jan 1, 2019
Our active lifestyles see us playing, pausing, and skipping through life, all the while with our phones in our hands. For many, completing the daily grind requires regular audio and visual media accompaniment, and for this we interact with our phones as we skip, run, and jump. In this respect, our mobile media player of choice is a unique form of digital library. These media players serve as both the interface for listening to and watching audio and visual media and the media library and storage itself. We argue, therefore, that the interface design of both the media library and the media interaction requires user-centered investigation. We tested button placement variations and analyzed user preferences as well as user interaction with these mobile media player prototypes while on the move. Early insights suggest users prefer what they are most accustomed to, yet issues of accuracy with unfamiliar interface designs require further investigation.
- Conference Article
- 10.1109/ssi52265.2021.9466961
- Apr 27, 2021
The Tactile Internet, which is considered by many to be the next generation of the Internet of Things (IoT), will enable real-time Human-Computer Interaction (HCI) systems capable of delivering tactile experiences remotely from the machine to the operator. Tactile Internet application fields include tactile robot teleoperation, which constitutes the next generation of collaborative robots, equipped with sensing capabilities to process human-like tactile sensation in Augmented/Virtual Reality (AR/VR) applications, e.g., advanced AR/VR training or education environments, automotive systems, and other application domains where Human-Machine Interfaces (HMI) are required [1]. Tactile-enabled, battery-powered HCI devices must satisfy ultra-low-latency haptic media constraints, which are an order of magnitude more sensitive to delays than audio and visual media [2], as well as the low power consumption constraints required by battery-powered portable or wearable technology. This paper describes the design considerations for power-efficient, low-latency tactile feedback technology and the modelling and characterization of the system-level latency associated with a tactile piezoelectric actuator driver. Such a driver architecture is envisaged for implementing haptic feedback in HMI scenarios, with a focus on reducing the latency of battery-powered, piezoelectric-based, tactile-enabled HCI devices.
- Video Transcripts
- 10.48448/8ycs-n784
- Nov 25, 2020
- Underline Science Inc.
In this talk, we present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media. The goal of this shared task is to design automatic methods for emphasis selection, i.e., choosing candidates for emphasis in textual content to enable automated design assistance in authoring. The main focus is on short text instances for social media, with a variety of examples, from social media posts to inspirational quotes. Participants were asked to model emphasis using plain text with no additional context from the user or other design considerations. The SemEval-2020 Emphasis Selection shared task attracted 197 participants in the early phase, and a total of 31 teams made submissions to this task. The highest-ranked submission achieved a Matchm score of 0.823. The analysis of systems submitted to the task indicates that BERT and RoBERTa were the most common choices of pre-trained model, and that part-of-speech (POS) tags were the most useful feature.
- Research Article
1
- 10.1177/154193120805200609
- Sep 1, 2008
- Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Presentations using a mixture of media can hold observers' interest and increase the likelihood that people will retain information. When verbal and visual media are used together, offloading on-screen text to narration in the presence of visual materials often improves learning. However, recent research suggests that this effect may depend on constraining the pace of presentation, and that the effect is reduced when time to study static visual materials is increased. The present experiment extends this research to animated visual materials by manipulating the verbal presentation modality and the pace of presentation. There was a main effect of presentation pace and no main effect of verbal presentation modality. This lack of a modality effect was unexpected and possibly arose from interactions with presentation pace. This suggests that designers need to consider the effects of verbal presentation modality and study time in tandem rather than as discrete design elements. It also points to the need to test other design combinations to guard against other unexpected or surprising relationships between design elements.
- Research Article
8
- 10.1162/leon_a_00410
- Aug 1, 2012
- Leonardo
This paper documents explorations into an alternative platform for immersive and affective expression within spatial mixed reality installation experiences. It discusses and analyzes experiments that use an advanced LED cube to create immersive, interactive installations and environments where visitors and visuals share a common physical space. As a visual medium, the LED cube has very specific properties and affordances, and optimizing the potential for such systems to create meaningful experiences presents many interlinked challenges. Two artworks exploring these possibilities are discussed. Both have been exhibited internationally in a variety of settings. Together with this paper, the works shed some light on the design considerations and experiential possibilities afforded by LED cubes and arrays. They also suggest that LED grids have potential as an emerging medium for immersive volumetric visualizations that occupy physical space.
- Research Article
10
- 10.1145/3484506
- Mar 4, 2022
- ACM Transactions on Interactive Intelligent Systems
Videos are a well-received medium for storytellers to communicate various narratives. To further engage viewers, we introduce a novel visual medium in which data visualizations are embedded into videos to present data insights. However, creating such data-driven videos requires professional video editing skills, data visualization knowledge, and even design talent. To ease this difficulty, we propose an optimization method and develop SmartShots, which facilitates the automatic integration of in-video visualizations. For its development, we first collaborated with experts from different backgrounds, including information visualization, design, and video production. Our discussions led to a design space that summarizes crucial design considerations along three dimensions: visualization, embedded layout, and rhythm. Based on that, we formulated an optimization problem that aims to address two challenges: (1) embedding visualizations while considering both contextual relevance and aesthetic principles, and (2) generating videos by assembling multimedia materials. We show how SmartShots solves this optimization problem and demonstrate its usage in three cases. Finally, we report the results of semi-structured interviews with experts and amateur users on the usability of SmartShots.
- Conference Article
8
- 10.1145/2341931.2341937
- Aug 5, 2012
This paper documents explorations into an alternative platform for immersive and affective expression within spatial mixed reality installation experiences. It discusses and analyzes experiments that use an advanced LED cube to create immersive, interactive installations and environments where visitors and visuals share a common physical space. As a visual medium, the LED cube has very specific properties and affordances, and optimizing the potential for such systems to create meaningful experiences presents many interlinked challenges. Two artworks exploring these possibilities are discussed. Both have been exhibited internationally in a variety of settings. Together with this paper, the works shed some light on the design considerations and experiential possibilities afforded by LED cubes and arrays. They also suggest that LED grids have potential as an emerging medium for immersive volumetric visualizations that occupy physical space.
- Conference Article
- 10.1145/3210825.3213551
- Jun 25, 2018
The overarching goal of this workshop is to bring together practices and research in medical fields with media content developers, designers, and UI/UX/QoE researchers, as well as hospital practitioners such as ophthalmologists and psychiatrists. Discussions will relate to: how new visual experiences and media content can improve the in-hospital experience of in-patients, caregivers, and medical staff; and how a better understanding of visual impairments (e.g., among the elderly) can help in designing more inclusive TV experiences. The starting point will be the cases and experiences of Care TVX, followed by a multidisciplinary discussion, led by workshop participants, on challenges and design considerations for adjusted or new hospital and care practices. The outcome of the workshop will be a collection of best practices in the form of position papers and online content.
- Research Article
9
- 10.5204/mcj.1251
- Aug 16, 2017
- M/C Journal
The #AustralianBeachspace Project: Examining Opportunities for Research Dissemination Using Instagram
- Conference Article
- 10.1145/2733373.2807416
- Oct 13, 2015
In this tutorial, we will teach how to use VM Hub (Visual Media Hub), an open multimedia hub with most of its code in the open-source space, to convert a multimedia application into a cloud service and to build mobile applications that consume the cloud service. The tutorial also covers the architecture and design considerations of VM Hub.
- Single Report
8
- 10.21236/ada096234
- Jan 1, 1981
This report presents relationships between aircrew training device (ATD) instructional support features and training requirements. Instructional support features include ATD hardware and software capabilities that permit instructors to manipulate, supplement, or otherwise control student learning experiences. The instructional features addressed are: freeze; automated demonstrations; record and replay; automated cuing and coaching; manual and programmable sets of initializing conditions; manual and programmable malfunction control; ATD-mounted audio-visual media; automated performance measurement; automated performance alerts; annunciator and repeater instruments; closed-circuit television; automated adaptive training; programmed mission scenarios; automated controllers; graphic and text readouts of controller information; computer-controlled threats; computer-managed instruction; recorded briefings; debriefing aids; and hardcopy printouts. Each feature is discussed, as appropriate, in terms of its operation, related features, instructional values, observed applications, utility (use-related) information, related research, and design considerations. (Author)
- Research Article
40
- 10.3991/ijim.v13i12.11560
- Dec 18, 2019
- International Journal of Interactive Mobile Technologies (iJIM)
<p class="0abstract"><span lang="EN-GB">How may we best utilize mobile augmented reality for storytelling when reconstructing historical events on location? In this article we present a series of narrative design considerations encountered when developing an augmented reality application recreating the assault on Omaha Beach in the early morning of D-Day. To what extent may we select existing genre conventions from, for example, documentary film, and adapt them to a location-based audio-visual medium like AR? How can we best combine sequence and access (the narrative flow of an unfolding historical event with the availability of background information) in order to enrich the experience of the story without distorting its coherence? To what extent may we draw from existing and well-known media representations of the Omaha Beach landing? How was the battle documented with contemporary means? We present the rich documentation of photos, films, drawings, paintings, maps, action reports, official reports, etc., and discuss how these have been employed to create the published AR situated simulation. We also describe and discuss the testing and evaluation of the application on location with visitors, as well as online tracking of its current use.</span></p>