Accessibility in Pure Data for visually impaired musicians: a real-time soundscape strategy

Abstract

Background: Pure Data (Pd) is a predominantly visual programming language widely used by musicians for composition and performance involving sound synthesis. This visual character implies poor accessibility for a visually impaired musician (VIM), because screen readers cannot recognize or manipulate the language's elements. Objectives: This work aims to present an inclusive strategy that enables a VIM to interact with other musicians in real time during performances. Methods: A sighted musician programmed a Pd script consisting of five patches. Switches and potentiometers of a MIDI controller were mapped to elements of the script, activating and modulating parameters of sound clips. The VIM uses the screen reader to access Pd, open the programmed code, configure the MIDI controller, and enable audio output; the VIM then executes the performance. Results: The result of the elaborated composition was a live soundscape performance in which the VIM's manipulation of the MIDI controller closely resembled that of the sighted musician. Conclusion: These strategies enabled a VIM to perform autonomously in real time using Pd patches, fostering interaction with sighted musicians and expanding accessibility.
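The controller-to-patch mapping described in the abstract can be sketched outside of Pd. The following minimal Python sketch is purely illustrative: the clip names, MIDI note numbers, and control-change assignments are hypothetical and not taken from the paper; it only shows the general idea of switches toggling clips and potentiometers modulating a parameter.

```python
# Hypothetical sketch of the mapping strategy: MIDI controller switches
# trigger sound clips and potentiometers modulate their parameters.
# Clip names and CC assignments are illustrative, not from the paper.

clips = {60: "rain", 61: "birds", 62: "traffic", 63: "wind", 64: "voices"}
state = {name: {"playing": False, "volume": 1.0} for name in clips.values()}

def handle_midi(status, data1, data2):
    """Dispatch one raw three-byte MIDI message."""
    kind = status & 0xF0
    if kind == 0x90 and data2 > 0:      # note-on: a switch toggles a clip
        name = clips.get(data1)
        if name:
            state[name]["playing"] = not state[name]["playing"]
    elif kind == 0xB0:                  # control change: a potentiometer
        cc, value = data1, data2
        if 1 <= cc <= 5:                # CC 1-5 -> clip volumes, scaled 0..1
            name = list(clips.values())[cc - 1]
            state[name]["volume"] = value / 127.0
```

In an actual setup the `state` updates would instead send messages into the running Pd patches; the dispatch-table shape of the mapping is the point of the sketch.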

Similar Papers
  • Conference Article
  • 10.54941/ahfe1006068
Construction of a VR music live performance system to support improved sense of presence
  • Jan 1, 2025
  • Shogo Yanaze + 2 more

Entertainment such as live music performances and events is often held in urban centers for reasons such as transportation accessibility and venue size, resulting in regional disparities in entertainment. Recent years, however, have seen the emergence of virtual reality (VR)-based live performances, which could reduce these disparities. VR live performances can provide many opportunities for people who are unable to attend live performances due to location and time constraints. However, the quality of the experience is expected to deteriorate, because the sense of realism and other sensations of a real live performance are difficult to reproduce. Few studies have compared real live performances with VR live performances and examined the effects of factors such as the sense of presence on the audience. In this study, an experiment was conducted with the aim of constructing a system to support the improvement of the sense of presence in VR live music performances. In the experiment, participants experienced an on-demand VR live performance, and we investigated the degree of realism and the differences from a real live performance. From the results, we analyzed the factors that are important for improving the realism of VR live performances; the system was then constructed and evaluated. Ten male participants (23.6 ± 1.8 years old) who had attended a live concert in the past and remembered the sensation were asked to experience a VR music concert in a standing position for 5 minutes, and a post-experience questionnaire was administered to compare the realism of the actual concert and the VR concert. The post-experience questionnaire consisted of a subjective questionnaire and open-ended questions regarding the sense of presence and the five senses. The subjective questionnaire used a 5-point Likert scale.
The items related to the sense of presence consisted of 13 items added by the researcher based on previous studies that analyzed the sense of presence. A score of 3 was used as the standard of the 5-point scale, based on comparison with previous real live experiences; the higher the value, the higher the evaluation of each element in the VR experience. According to the results of the post-experience questionnaire, the vibration and communication elements were rated low. The VR live experience and its components in this experiment did not reproduce the vibrations from the large speakers or the communication with other people, such as the performers and the surrounding audience, that are felt in a real live performance. The experience therefore tended to feel like a one-way live performance, and the quality of the experience was considered to be compromised. In addition, factor analysis of the questionnaire results showed that among the body-perception factors with the highest contribution rate, the values of dynamism, vibration, and sense of self-presence were the most significant, and thus had a large impact on the sense of presence in the live performance.

  • Research Article
  • Cited by 10
  • 10.1006/jvlc.2000.0175
Building Environments for Visual Programming of Robots by Demonstration
  • Oct 1, 2000
  • Journal of Visual Languages and Computing
  • Philip T Cox + 1 more

  • Book Chapter
  • Cited by 2
  • 10.11647/obp.0353.17
17. Audiences of the Future
  • Jan 30, 2024
  • Michelle Phillips + 1 more

The COVID-19 pandemic introduced audiences to new ways of engaging with artistic performance in an online environment (Rendell, 2020, terms this ‘pandemic media’). Multiple performers and organisations transferred live performances into a recorded or livestreamed format. However, at present, there is little research to support the decisions organisations make about how they do this, and about what they deem important in how they record and/or stream. There is evidence to support the value of ‘liveness’ in music performance (Tsangaris, 2020), but what is it, and can it be replicated in an online environment? This chapter outlines existing research regarding concepts such as liveness in music performance. It also discusses research regarding the live music experience as a social one, and the vital role that sharing musical spaces plays in social bonding and group coherence. The study examines questions including what listeners perceive to be the main differences between live and livestreamed attendance at a music performance, and what constitutes ‘liveness’ in such performances. Data analysis suggests that audiences may have different motivations to attend live versus livestreamed performances: the former is associated with having fun, a good night out, and shared experience, while the latter is often about using time in a meaningful way and the sound quality available when attending an event via livestream. ‘Liveness’ involves not only factors such as the opportunity to share an experience and interact with other audience members and performers, but also the sense of atmosphere, immersion, sensory experiences, and being physically present. When asked about the advantages and disadvantages of attending a livestreamed performance, audience members cite factors common to both live and online experiences, such as logistics and whether they are with other people.
However, a thematic analysis also reveals differences in what people see as the advantages and disadvantages of attending online, such as the emotional response to a live performance, and considerations around accessibility and the environmental impact of online experiences. There is an urgent need in the music industry to better understand what the essential elements of a live performance are, and whether these aspects need to be, and indeed can be, replicated in a livestreamed event, for example in terms of sound quality and emotional response.

  • Research Article
  • Cited by 1
  • 10.1080/09298219308570634
Real‐time concurrent pascal music
  • Aug 1, 1993
  • Interface
  • Leonello Tarabella

Real-Time Concurrent PascalMusic (RTCPM) is a musical language which allows algorithmic composition and interaction during live performances. It consists of the standard Pascal language extended with a set of primitives that manage musical and acoustic events. RTCPM is based on the idea that a running program/composition issues MIDI messages which feed synthesizers, and that the overall musical result depends on the computation of predefined algorithms which are also controlled in real time by incoming data that a performer issues using MIDI controllers. The concurrency feature, which allows many processes to be activated at the same time, is the vital mechanism that makes PascalMusic work.
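The core idea above, concurrent processes jointly issuing a time-ordered MIDI stream, can be sketched without RTCPM itself. This illustrative Python sketch (the process shapes and message format are assumptions, not RTCPM's actual primitives) models each "process" as a generator yielding (delay, message) pairs and merges them with a priority queue:

```python
import heapq

# Illustrative sketch, not RTCPM: concurrent "processes" are generators
# yielding (delay, midi_message); a scheduler merges them into one
# time-ordered event stream, mirroring a running program/composition
# that issues MIDI messages to synthesizers.

def pulse(note, period, count):
    """A trivial process: emit `count` note-ons, `period` seconds apart."""
    for _ in range(count):
        yield period, ("note_on", note)

def schedule(processes):
    """Merge generator processes into a single (time, message) list."""
    heap = []
    for pid, proc in enumerate(processes):
        delay, msg = next(proc)
        heapq.heappush(heap, (delay, pid, msg, proc))
    events = []
    while heap:
        t, pid, msg, proc = heapq.heappop(heap)
        events.append((t, msg))
        try:
            delay, msg = next(proc)          # ask the process for its next event
            heapq.heappush(heap, (t + delay, pid, msg, proc))
        except StopIteration:
            pass                             # this process has finished
    return events
```

A real-time version would additionally poll incoming controller data between events, as the abstract describes; the merge logic stays the same.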

  • Conference Article
  • Cited by 6
  • 10.1109/vl.1993.269597
A visual logic programming language based on sets and partitioning constraints
  • Aug 24, 1993
  • L Spratt + 1 more

This paper presents a new programming language named SPARCL that has four major elements: it is a visual language, it is a logic programming language, it relies on sets to organize data, and it supports partitioning constraints on the contents of sets. It is a visual programming language in that the representation of the language depends extensively on non-textual graphics and the programming process relies on graphical manipulation of this representation. It is a logic programming language in that the underlying semantics of the language is the resolution of clauses of a Horn-like subset of first order predicate logic. It uses sets as the only method of combining terms to build complex terms. Finally, one may constrain a set's structure by specifying a partitioning into pairwise disjoint subsets.

  • Conference Article
  • 10.1145/157709.157787
Visual programming languages from an object-oriented perspective (abstract)
  • Jan 1, 1992
  • Allen L Ambler + 1 more

Visual programming language research has evolved greatly since its early days. At first, attempts at visual programming mostly took the form of flowchart-like diagrams. But in recent years, a number of innovative approaches have been incorporated into visual languages, including object-oriented programming, form-based programming, programming by demonstration, and dataflow programming. Unfortunately, while many of these systems represent important ideas, only a few have been successful as complete visual programming languages. This tutorial explains why this is true, and describes ways in which the problem can be addressed. It explores the issues behind the successes and failures of earlier approaches from a design perspective, identifies characteristics of successful visual programming languages, and explains how to design an object-oriented language that maintains those characteristics. It shows solutions to a number of problems by looking at existing visual programming languages, including Prograph.

  • Research Article
  • Cited by 67
  • 10.2307/3680894
An Interactive MIDI Accompanist
  • Jan 1, 1998
  • Computer Music Journal
  • Petri Toiviainen

The ability to infer beat and meter from music is one of the basic activities of musical cognition. After hearing only a short fraction of music, we are able to develop a sense of beat and to tap our foot along with the music. Even if the music is rhythmically complex, containing a range of different time values and possibly syncopation as well, we are capable of inferring the different periodicities present in the music and synchronizing to them. Simulating this activity with a computer program might seem, at first glance, to be simple: if a note onset (that is, an attack) occurs before the system expects it to occur, the estimated tempo is increased, and vice versa. In practice, however, it is a more demanding task. Besides rhythmic complexity, the computer must be capable of dealing with the temporal deviations present in every musical performance. Such deviations can be either intentional (i.e., manipulations of timing intended to attract listeners' attention) or unintentional (i.e., inaccuracies in timing caused by the technical inabilities of the performer). Furthermore, the temporal deviations can be either short or long term: they can consist of timing inaccuracies of a single note or a few notes, or result in a permanent change of tempo. An ideal beat-tracker program should be capable of adapting to temporal deviations of both kinds. Models that attempt to address the foot-tapping problem (to rhythmically parse musical performances in real time) have been proposed by Dannenberg (1984), Vercoe and Puckette (1985), Dannenberg and Mont-Reynaud (1987), Allen and Dannenberg (1990), Baird, Blevins, and Zahler (1993), and Vantomme (1995). The systems of Dannenberg (1984), Vercoe and Puckette (1985), Baird, Blevins, and Zahler (1993), and Vantomme (1995) are intended to follow a live performance of a predefined score; that is, the systems are provided with a priori knowledge about the input.
They rely on both pitch and note-onset data extracted from the musical performance, and try to match these with the score. The implementations of Dannenberg and Mont-Reynaud (1987) and Allen and Dannenberg (1990) do not need any a priori information about the musical performances they follow. The approach of Dannenberg and Mont-Reynaud (1987) is based on control theory; it uses a weighted average of previously perceived tempos to compute the current tempo. Allen and Dannenberg (1990) employ a system that considers several alternative interpretations using the technique of beam search; each interpretation consists of an estimated tempo and an estimated beat phase. An important contribution to beat tracking was the introduction of adaptive oscillators by Large and Kolen (1994). These oscillators attempt to adjust their phase and period so that, during stimulation, their output pulses become, and remain, phase- and frequency-locked to the stimulus. The stimulus consists of a series of impulses, each of which corresponds to the onset of an individual note. This article describes an interactive accompanying system that works in a MIDI environment. It tracks the tempo of a musical performance in real time, and plays back a predefined accompaniment in synchrony with the performance. Except for an estimate of the initial tempo, it does not need any a priori information about the musical performance; therefore, it is also capable of following improvisations. The system is based on an adaptive oscillator similar to the one introduced by Large and Kolen (1994), but employs a more sophisticated adaptation mechanism.
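The adaptive-oscillator idea can be sketched compactly. The following simplified Python sketch is in the spirit of Large and Kolen's oscillators but is not the paper's mechanism: the gain values and the error-wrapping rule are illustrative assumptions. The oscillator nudges its expected beat time (phase) and beat period (tempo) toward each incoming note onset:

```python
class AdaptiveOscillator:
    """Simplified beat tracker: phase and period drift toward note onsets.
    Gains are illustrative, not taken from Large & Kolen or the article."""

    def __init__(self, period, eta_phase=0.5, eta_period=0.2):
        self.period = period      # current beat-period estimate (seconds)
        self.next_beat = period   # time of the next expected beat
        self.eta_phase = eta_phase
        self.eta_period = eta_period

    def onset(self, t):
        """Feed one note-onset time; returns the updated period estimate."""
        # signed error between the onset and the expected beat
        error = t - self.next_beat
        # wrap into [-period/2, period/2] so far-off onsets pull only weakly
        error = (error + self.period / 2) % self.period - self.period / 2
        self.next_beat += self.eta_phase * error    # phase correction
        self.period += self.eta_period * error      # period (tempo) correction
        while self.next_beat <= t:                  # advance past this onset
            self.next_beat += self.period
        return self.period
```

Fed a steady pulse at a tempo different from its initial estimate, the period estimate converges toward the true inter-onset interval, which is the locking behaviour the abstract describes.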

  • Book Chapter
  • Cited by 17
  • 10.1007/978-1-4613-1805-7_1
Introduction: Visual Languages and Iconic Languages
  • Jan 1, 1986
  • Shi-Kuo Chang

The term “visual language” means different things to different people. To some, it means the objects handled by the language are visual. To others, it means the language itself is visual. To the first group of people, “visual language” means “language for processing visual information,” or “visual information processing language.” To the second group of people, “visual language” means “language for programming with visual expressions,” or “visual programming language.”

  • Conference Article
  • Cited by 7
  • 10.1109/tale52509.2021.9678608
Comparison Experiment of Learning State Between Visual Programming Language and Text Programming Language
  • Dec 5, 2021
  • Katsuyuki Umezawa + 5 more

Recently, visual programming languages such as Scratch have been popular among novice programmers. Afterward, they employ text-based programming languages such as C and Java. Nevertheless, there are significant barriers between visual and text-based languages. Thus, it is important to establish a seamless transition from visual to text-based languages. In this study, we clarify the difference in the learning process between visual language and text-based language by measuring brain waves. Specifically, experiments will be conducted to solve problems with various difficulty levels for learning visual and text-based languages. The brain waves will be measured, and the values of β/α will be evaluated. Results show that the values of β/α when solving difficult problems increased in the text-based language, but not in the visual language. This suggests that beginners may be thinking differently in the learning process of visual and text-based languages.

  • Research Article
  • 10.1525/jsmg.2022.3.2-3.154
Review: Explosions in the Mind: Composing Psychedelic Sounds and Visualisations, by Jonathan Weinel
  • Jul 1, 2022
  • Journal of Sound and Music in Games
  • George Reid

  • Research Article
  • 10.31937/ultimart.v17i2.3692
Analysis of Lyrics and Visuals of Good Faith Forever (2023) Live Music Performance
  • Dec 29, 2024
  • Ultimart: Jurnal Komunikasi Visual
  • Andrea Rebecca Gunawidjaja + 1 more

As time and technology advance, visualization plays an important role in supporting music performance. Good Faith Forever is a live music performance released in 2021, created and performed by Hugo Pierre Leclercq, and is an example of a unique live music performance: it builds a narrative with a different perspective from the lyrics themselves by blending the music and visuals together, making it an interesting subject to analyze. The objective of this research is to uncover the relationship between the lyrics and visuals of the 2023 live performance of Good Faith Forever as a case study. This research uses an interpretative qualitative approach, drawing on Roland Barthes' semiotics and Geoffrey Leech's semantics to analyze the visuals and lyrics. From the 2023 live performance of Good Faith Forever, “The Prince” and “Neo Finale”, it is found that the lyrics and the performance tell the same stories but with different focuses: the lyrics focus on the message of the story, while the performance focuses on the narrative and characters living in the message of the lyrics. This implies the existence of creative freedom in interpreting the lyrics to form a similar story for the live performance. Keywords: good faith forever; live music performance; lyrics; music visualization; narratives

  • Research Article
  • 10.3389/fcomp.2025.1578595
Of altered instrumental relations: a practice-led inquiry into agency through musical performance with neural audio synthesis and violin
  • Oct 21, 2025
  • Frontiers in Computer Science
  • Halla Steinunn Stefánsdóttir + 1 more

Recent developments in artificial intelligence (AI) are rapidly generating new musical practices. While the use of generative AI in producing music is increasingly well known, intelligent algorithms are also being incorporated directly into musical instruments. Often based on small, personal artistic datasets, these systems augment computational agency in ways that alter the perception of the human performer and transform the performer–instrument relationship. Such developments raise questions about co-creativity, instrumental materiality, augmentation through code, and how musical expressivity and communication materialise in performance with AI. This article reports on research conducted through artistic experimentation and live performance. The project involved the design of an “intelligent violin” and proceeded in four phases: (1) curating datasets, (2) training a neural audio synthesis model, (3) working with the model in practice and live performance, and (4) analysing the artistic outcomes. Documentation and analysis of the artistic process provided the basis for identifying emergent creative and phenomenological relationships between performer and instrument. The findings reveal how algorithmic augmentation reshapes the agencies at play in performance and transforms both the affordances and the sociality of the creative encounter. The intelligent violin altered performer perception, shifting the dynamics of control, responsibility, and co-creativity. The research further documents how these processes affected musical expressivity and performer–instrument communication.

  • Conference Article
  • 10.1109/iisa.2015.7388015
Visual language as a mean of communication in the field of information technology
  • Jul 1, 2015
  • Alexandr V Moiseenko + 4 more

There is a challenge in developing a visual computer-programming language at the intersection of interdisciplinary research: cybernetics and philosophy, in particular the philosophy of mathematics, the philosophy of language, and a new area of philosophical analysis, the “philosophy of information”. We consider the “language–thinking” aspect and the stages of developing interaction between a person and the information environment by means of programs. We pose the problem of finding ways for non-professionals to construct software. We propose an efficient way to eliminate the distinction in the “interface–software programming language” system by using a visual programming language as a tool for people to communicate with machines. We outline the prospects of using visual programming languages and the difficulties in their creation and implementation. We conclude that developing cloud technologies opens up new opportunities to create cross-subject tools for creating and changing software functionality by people without professional qualifications in programming. Developing and using visual languages can have a considerable impact on the development of nano-bio-info-cogno (NBIC) technologies, as well as on research in the fields of cognitive management, artificial intelligence as a model of visual thinking, and other research areas. We recommend further study of how to use visual programming languages in their social, philosophical, and psychological aspects.
