Abstract

The Internet was not initially designed for real-time traffic: its conventional “best-effort” approach offers no guarantee of packet delivery. Nevertheless, within the last 20 years, and as part of the ongoing trend towards globalization and distributed work processes, IP-based telecommunication has become a widely accepted and commonly used service. In that context a number of researchers have investigated how far distributed communication on the Internet can be applied to artistic music performances. Such a scenario exhibits signal delay boundaries roughly one tenth of the common video conferencing thresholds of 250 ms or more. Several successful results and actual implementations exist; however, apart from minor details, all of them share the same or at least similar approaches. In that context we established the fast-music research project in order to identify and develop novel approaches within this domain. In this paper we present the final results of this project, which ran from 2016 to 2019. The work in fast-music was divided into five main goals: with respect to audio, we aimed for the development of a versatile streaming solution, the creation of a synchronizable standalone hardware device and the installation of a server-based streaming solution; in terms of video, we developed a latency-optimized capture/display component and an alternative IR-tracking-based technology with 3D support.
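As a rough illustration of this constraint, the following sketch adds up typical contributors to the one-way delay of a networked music session and compares the total against the roughly 25 ms budget implied above. All figures (sample rate, buffer size, network delay, jitter-buffer depth) are assumptions chosen for illustration, not measured values from the project.

// Back-of-the-envelope latency budget for a networked music session.
// All numbers below are illustrative assumptions, not measured values.
#include <iostream>

int main() {
    const double sample_rate_hz   = 48000.0;
    const double block_frames     = 128.0;  // assumed sound card buffer size
    // capture + playback blocking delay of the audio interface
    const double blocking_ms      = 2.0 * block_frames / sample_rate_hz * 1000.0;
    const double network_ms       = 10.0;   // assumed one-way network delay
    // assumed receiver-side jitter buffer holding two packets
    const double jitter_buffer_ms = 2.0 * block_frames / sample_rate_hz * 1000.0;

    const double total_ms = blocking_ms + network_ms + jitter_buffer_ms;
    std::cout << "estimated one-way delay: " << total_ms << " ms"
              << (total_ms <= 25.0 ? " (within" : " (exceeds")
              << " the ~25 ms music threshold)\n";
}

With these assumed figures the budget is just met; larger sound card buffers, deeper jitter buffers or longer network paths quickly push the total past the threshold, which is why every stage of the chain has to be latency-optimized.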

Highlights

  • Until the late 1960s, wide area networks (WAN) were commonly available in terms of voice telecommunication only [28]

  • Asynchronous data networks played no significant role in that context as they were mainly intended for data retrieval in company-owned local area networks (LAN)

  • On top of the natural path, the electronic path and the digital path, the Internet might add significant delays due to (1) detours introduced by the routing of packets and (2) jitter buffers that compensate for delay variations


Summary

INTRODUCTION

Until the late 1960s, wide area networks (WAN) were commonly available in terms of voice telecommunication only [28]. In order to compensate for the effect of network jitter, a common approach is to apply a jitter buffer at the receiver’s end: by buffering a number of audio packets in the network queue, the audio process can still provide solid playback in case of delayed packets. On top of the natural path, the electronic path and the digital path, the Internet might add significant delays due to (1) detours introduced by the routing of packets and (2) jitter buffers that compensate for delay variations. Based on these issues, the main author Alexander Carôt developed the Soundjack software and released it in 2006. The main inspiration was a low-latency video processing system developed by Jeremy Cooperstock at McGill University [21].
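As a minimal sketch of the jitter-buffer idea described above (this is not the Soundjack implementation; the packet format and the buffer depth are assumptions), the receiver delays playback until a small number of packets have been queued, so that moderately late packets can still be played out on time:

// Minimal jitter-buffer sketch (illustrative only).
// Playback starts once `target_depth` packets are queued; the resulting
// safety margin absorbs packets that arrive late because of network jitter.
#include <cstdint>
#include <deque>
#include <iostream>
#include <optional>
#include <vector>

struct AudioPacket {
    uint32_t seq;                // sender-side sequence number
    std::vector<float> samples;  // one block of PCM samples
};

class JitterBuffer {
public:
    explicit JitterBuffer(size_t target_depth) : target_depth_(target_depth) {}

    // Called by the network thread for every received packet.
    void push(AudioPacket pkt) { queue_.push_back(std::move(pkt)); }

    // Called by the audio callback once per block. Returns std::nullopt
    // ("play silence / repeat last block") while buffering or on underrun.
    std::optional<AudioPacket> pop() {
        if (!primed_) {
            if (queue_.size() < target_depth_) return std::nullopt;  // still buffering
            primed_ = true;
        }
        if (queue_.empty()) {  // underrun: a packet is late or lost
            primed_ = false;   // re-prime to rebuild the safety margin
            return std::nullopt;
        }
        AudioPacket pkt = std::move(queue_.front());
        queue_.pop_front();
        return pkt;
    }

private:
    size_t target_depth_;
    bool primed_ = false;
    std::deque<AudioPacket> queue_;
};

int main() {
    JitterBuffer jb(2);  // hold two packets before starting playback
    for (uint32_t seq = 0; seq < 5; ++seq)
        jb.push({seq, std::vector<float>(128, 0.0f)});
    while (auto pkt = jb.pop())
        std::cout << "playing packet " << pkt->seq << '\n';
}

The trade-off is direct: every additional packet of buffer depth absorbs more jitter but adds one block length of delay to the signal path, which is exactly the tension between robustness and low latency that networked music performance has to balance.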

PROBLEM AND GOALS
STREAMING ARCHITECTURE
SYNCHRONIZABLE AUDIO HARDWARE DEVICE