Abstract

Object-audio workflows for traditionally flat broadcasts have recently emerged with the introduction of new audio formats such as MPEG-H and Atmos. These formats enable object-based mixes that the end user can render dynamically according to their reproduction hardware. Until very recently, only post-produced content was created in these formats, but new broadcast standards in the U.S. and Asia, together with new hardware encoding engines, have made live sports production in these formats feasible. These formats allow a fuller, more immersive sound design and open up possibilities for personalization. The question then arises of how to capture live action from the field in a way that provides these object-audio workflows with the desired isolated sounds and accompanying metadata. Current capture systems provide insufficient isolation from the crowd to highlight individual action sounds and dialog from the field, and in most cases placing traditional microphones near the action is not possible. In this paper, we present new microphone techniques and systems that improve sound capture for the needs of future object-audio broadcast formats, including beamforming techniques, automatic steering, and systems management of microphone arrays.
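As one illustration of the kind of beamforming the abstract refers to, a delay-and-sum beamformer aligns the signals of a microphone array toward a chosen look direction so that on-axis field sounds add coherently while diffuse crowd noise averages down. The sketch below is ours, not from the paper: the function name `delay_and_sum`, the far-field plane-wave model, the four-microphone linear geometry in the usage example, and the integer-sample delay approximation are all assumptions made for illustration.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_dir, fs, c=343.0):
    """Simple far-field delay-and-sum beamformer.

    signals:       (n_mics, n_samples) array of time-aligned channel data
    mic_positions: (n_mics, 3) microphone coordinates in meters
    look_dir:      unit vector pointing from the array toward the source
    fs:            sample rate in Hz
    c:             speed of sound in m/s
    """
    # Plane-wave arrival-time offsets per microphone (seconds),
    # shifted so the earliest channel has zero delay.
    delays = mic_positions @ look_dir / c
    delays -= delays.min()
    n_samples = signals.shape[1]
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays):
        # Integer-sample approximation; production systems would use
        # fractional-delay filters for finer steering resolution.
        shift = int(round(d * fs))
        out[: n_samples - shift] += sig[shift:] if shift else sig
    return out / signals.shape[0]

# Usage: a 4-mic linear array on the x-axis steered broadside (all
# delays zero), so identical channels are passed through unchanged.
fs = 48000
t = np.arange(512) / fs
sig = np.sin(2 * np.pi * 1000 * t)
mics = np.array([[i * 0.05, 0.0, 0.0] for i in range(4)])
out = delay_and_sum(np.tile(sig, (4, 1)), mics, np.array([0.0, 1.0, 0.0]), fs)
```

Automatic steering, in this framing, amounts to updating `look_dir` over time, e.g. from a localization estimate that tracks the action on the field.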
