Abstract
2d Lt Jeffrey M. Hebert, Member AIAA, Electrical Engineer, WL/FIGD, WPAFB, OH 45433

Recent advances in digital music instrument technology have dramatically changed the way musicians create and produce music. With the digitally based music synthesizers available on the commercial market, it is possible to re-create virtually any sound with an unprecedented level of realism. At the Wright Laboratory (WL) Engineering Flight Simulation Facility, this technology has been borrowed from the professional music industry to produce an extremely realistic, low-cost sound effects capability for piloted air-combat simulation. In addition to the decreased engineering costs associated with using commercial off-the-shelf equipment, this approach to generating simulator sound cues has produced many other benefits.

In earlier simulation sound systems, computers had to generate numerous discrete and analog signals to control sound-generating hardware in real time. Most of today's music synthesizers contain specialized digital signal processors that can generate extremely complex sounds in real time. Many also use microprocessor front-ends to provide control over these sounds from an external host computer. In the context of flight simulators, this distributed processing translates into less computational overhead for the simulation host computer.

The advantages of external control in live performance have prompted the music industry to adopt a Musical Instrument Digital Interface (MIDI) standard [1]. Spearheaded by the International MIDI Association (IMA), MIDI has gained wide acceptance across the electronic musical instrument industry as the de facto standard for performance control of electronic instruments. Being specialized for the control of musical expression, it can afford the simulation programmer the same subtle control over the generation of sound effects as a musician has over the nuances of musical notes. And since it is a real-time control protocol, no appreciable transport delay is incurred.

Choosing digital sampling synthesizers to generate cockpit sound effects has provided the benefit of added realism. Unlike other synthesizer architectures, which generate complex sounds by modulating and combining simple periodic waveforms, samplers start with digitized recordings of real-world sounds. Sophisticated built-in software allows precise control over the shape and contour of the sound. Designing sounds this way is extremely easy, and the results are often stunningly realistic.

While each of these advantages is significant, the most important argument for using sampler-based sound cueing systems is the realism of the result. Taken together with good visual and motion cues, a comprehensive sonic environment increases the ability to suspend disbelief in piloted air-combat simulations. This in turn helps yield more accurate simulation test results.

This paper discusses in detail the development and implementation of a digital sampling synthesizer based sound effects system for piloted combat mission simulations at the WL Engineering Flight Simulation Facility. Beginning with the initial simulation requirements and continuing through to the results obtained in actual piloted combat simulations, this paper highlights the simplicity and flexibility of the system, as well as the effectiveness this exploitation of commercial technology has provided the Air Force.

This paper is declared a work of the U.S. Government and is not subject to copyright protection in the United States.
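To make the performance-control idea concrete, the sketch below builds raw MIDI 1.0 channel messages of the kind a simulation host could send to a sampler. The status-byte layout (0x90 for Note On, 0x80 for Note Off, channel in the low nibble, 7-bit data bytes) is standard MIDI 1.0; the mapping of a particular note number to a cockpit sound effect (e.g. note 60 as a gunfire sample) is a hypothetical example, not taken from the paper.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI 1.0 Note On message.

    Status byte is 0x90 | channel (channel 0-15); the two data
    bytes (note number and velocity) are each limited to 0-127.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])


def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
    """Build a 3-byte MIDI 1.0 Note Off message (status 0x80 | channel)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, note, velocity])


# Hypothetical cue: trigger a gunfire sample mapped to note 60 on
# channel 0, loudly (velocity 112), then release it.
fire = note_on(0, 60, 112)    # b'\x90<p' -> 0x90, 60, 112
release = note_off(0, 60)     # 0x80, 60, 0
```

Because each cue is only three bytes on a 31.25 kbaud serial link, triggering a sound this way adds negligible transport delay, which is the real-time property the abstract relies on.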