Abstract

Music offers a uniquely abstract way to express human emotions and moods, wherein melodic harmony is achieved through a succinct blend of pitch, rhythm, tempo, texture, and other sonic qualities. The emerging field of “Robotic Musicianship” focuses on developing machine intelligence, in terms of algorithms and cognitive models, to capture the underlying principles of musical perception, composition, and performance. The capability of new-generation robots to manifest music in a human-like, artistically expressive manner lies at the intersection of engineering, computer science, music, and psychology, promising new forms of creating, sharing, and interpreting musical impulses. This manuscript explores how real-time collaboration between humans and machines might be achieved by integrating technological and mathematical models from synchronization and learning, precisely configured for the seamless generation of melody in tandem, towards the vision of a human–robot symphonic orchestra. To explicitly capture the key ingredients of a good symphony—synchronization and anticipation—this work discusses a possible approach based on the joint strategy of: (i) Mapping, wherein mathematical models for oscillator coupling, such as the Kuramoto model, could be used to establish and maintain synchronization, and (ii) Modelling, employing modern deep learning predictive models, such as neural network architectures, to anticipate (or predict) future state changes in the sequence of music generation and pre-empt transitions in the coupled oscillator sequence. It is hoped that this discussion will foster new insights and research for better “real-time synchronized human–computer collaborative interfaces and interactions”.
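As a rough illustration of the Mapping step, the following Python sketch (an illustrative assumption, not code from the manuscript) integrates the standard Kuramoto phase equations for a small human–robot ensemble; the order parameter r indicates how tightly the players' beat phases have locked.

```python
import numpy as np

def kuramoto_step(phases, natural_freqs, coupling, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    # Pairwise phase differences theta_j - theta_i
    diffs = phases[np.newaxis, :] - phases[:, np.newaxis]
    coupling_term = (coupling / n) * np.sin(diffs).sum(axis=1)
    return phases + dt * (natural_freqs + coupling_term)

def order_parameter(phases):
    """Degree of synchronization r in [0, 1]; r near 1 means phase lock."""
    return np.abs(np.exp(1j * phases).mean())

# Toy example: a human drummer near 120 BPM (2 Hz) and two robot players
# whose natural tempi drift slightly; the coupling pulls them into step.
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, size=3)
freqs = 2 * np.pi * np.array([2.00, 1.95, 2.05])  # rad/s
for _ in range(2000):
    phases = kuramoto_step(phases, freqs, coupling=2.0, dt=0.005)
print(f"order parameter r = {order_parameter(phases):.3f}")
```

In a live setting, the robot oscillators would additionally be nudged by beat onsets measured from the human performer, rather than evolving from fixed natural frequencies alone.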

Highlights

  • Music has enraptured humans for ages and offers an unbridled avenue for the expression of emotions, intellect, passions and moods, for us and for other creatures of nature

  • In a nutshell, this manuscript proposes a potential architecture for creating human–robot ensemble-based musical performances

  • We discuss the possibility of integrating mathematical modelling and machine-learning models to efficiently tackle the pivotal challenge of human–robot synchronization in real-time for a musical ensemble


Summary

Introduction

Music has enraptured humans for ages and offers an unbridled avenue for the expression of emotions, intellect, passions and moods, for us and for other creatures of nature. The MIDI score of the different instruments can be used to calculate the ‘leadership index’ of each instrument at any given point of time, based on its dominance. Such deep learning ensemble models can benefit from the introduction of additional features, like the leadership index and phase identification from the mapping module, to learn time-series-based relationships between chord progression and leader transition. By providing a reliable prediction of the synchronized-state characteristics (like beats per minute), the modelling module can reduce the latency of Kuramoto oscillator convergence in the mapping phase. To this end, a deep-learning-based regression model (like an LSTM) can be trained to predict sonic feature values, using the identified leader and chord recognition from the musical notation sheet as features. The functioning of the Cyborg Philharmonic can be outlined as (i) Leader Detection, (ii) Beat Prediction, and (iii) Music Synchronization, showcasing a unique fusion between traditional mathematical models and recent AI predictive techniques for achieving a synchronous human–machine orchestra performance.
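To make the Beat Prediction step concrete, the minimal sketch below (assuming PyTorch; the BeatPredictor name and the exact feature set are hypothetical) shows how an LSTM regressor could map a window of per-beat features, such as recent tempo, the leadership index, and a chord label, to the expected beats per minute of the next beat, which the Kuramoto mapping module could then use to converge faster.

```python
import torch
import torch.nn as nn

class BeatPredictor(nn.Module):
    """Hypothetical LSTM regressor: given a window of per-beat features
    (e.g. current BPM, leadership index, chord class), predict the BPM
    of the next beat so the oscillator coupling can be pre-empted."""
    def __init__(self, n_features=3, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # BPM estimate for the next beat

# Toy usage on random data standing in for features extracted from MIDI.
model = BeatPredictor()
window = torch.randn(8, 16, 3)   # 8 sequences, 16 beats, 3 features each
predicted_bpm = model(window)    # shape (8, 1)
print(predicted_bpm.shape)
```

Only the hidden state of the last time step feeds the regression head here; richer variants could attend over the whole window or predict several beats ahead.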

