The Auralization and Acoustics Laboratory




Russell Storms, Naval Postgraduate School (NPS)
Lloyd Biggs, NPS
William Cockayne, NPS
Paul Barham, NPS
John Falby, NPS
Don Brutzman, NPS
Michael Zyda, NPS



Abstract: As an expansion of the NPSNET Research Group (NRG), the Auralization and Acoustics Laboratory (AA-Lab) at the Naval Postgraduate School studies the integration of aural cues into virtual environments. Currently, the AA-Lab focuses on spatial-acoustic sound rendering via headphones (closed-field) and loudspeakers (open-field).


Mission

The AA-Lab mission is to study, through both research and education, how aural cues can increase one's sense of immersion in virtual worlds. Using the lab's facilities, staff, faculty, and students explore various facets of the auditory channel. The AA-Lab introduces cross-modal (aural and visual) capabilities within the NRG and the Laboratory for Human Interaction in the Virtual Environment (HIVE) (Cockayne et al., 1996). We hope to provide the virtual environment research community with new capabilities and insights into the effective use of the auditory channel. For more details about the lab and its mission, visit the AA-Lab home page at http://www-npsnet.cs.nps.navy.mil/npsnet/aa-lab/


Current Research Directions

Research within the AA-Lab focuses on the ability of low-cost commercial sound equipment with a Musical Instrument Digital Interface (MIDI) (International MIDI Association, 1983) to produce aural cues for the distributed virtual environment of the Naval Postgraduate School Networked Vehicle Simulator (NPSNET) (Zyda et al., 1993a, 1993b, 1995; Macedonia et al., 1995). This research specifically examines the ability of both headphone (closed-field) and loudspeaker (open-field) delivery systems to integrate aural cues into large-scale, distributed virtual environments that comply with the Distributed Interactive Simulation (DIS) protocol.
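
To make the MIDI-based approach concrete, the sketch below shows how a single DIS-style sound event might be mapped to a three-byte MIDI Note On message sent to an external sampler. This is only an illustration under assumptions: the device path, the event identifiers, and the event-to-note mapping are hypothetical, not the NPSNET implementation.

    #include <fcntl.h>
    #include <unistd.h>

    enum { EVENT_EXPLOSION = 0, EVENT_ENGINE = 1 };            /* illustrative event IDs  */
    static const unsigned char NOTE_FOR_EVENT[] = { 36, 48 };  /* assumed sampler mapping */

    /* Send a three-byte MIDI Note On (status 0x90 | channel) for a sound event. */
    static void trigger_sound(int midi_fd, int event, int channel, int velocity)
    {
        unsigned char msg[3];
        msg[0] = (unsigned char)(0x90 | (channel & 0x0F));
        msg[1] = NOTE_FOR_EVENT[event];
        msg[2] = (unsigned char)(velocity & 0x7F);
        write(midi_fd, msg, sizeof msg);
    }

    int main(void)
    {
        int fd = open("/dev/midi", O_WRONLY);   /* hypothetical device node for the MIDI port */
        if (fd >= 0) {
            trigger_sound(fd, EVENT_EXPLOSION, 0, 110);
            close(fd);
        }
        return 0;
    }

Because each trigger is only a few bytes, a MIDI-driven sampler offloads all sample playback from the host workstation, which is what makes low-cost commercial equipment attractive for this role.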


Headphone Systems

We are investigating alternative headphone delivery systems for NPSNET. Proposed headphone delivery systems must spatially render a minimum of eight simultaneous sound events in real time, a capability benchmark driven by our use of the Acoustetron II from Crystal River Engineering (CRE) (CRE, 1996). One approach produces spatial sound on the same workstation that renders the graphics of a virtual simulation. The processing required to produce even one sound overwhelms the Central Processing Unit (CPU), causing unacceptable latency in both the sound and the visual presentation. Another approach dedicates a workstation as a sound server, rendering spatial sound for multiple clients connected via a local-area network. Because of the large network bandwidth required to service multiple client requests for sound and then return the rendered sound to the requesting clients, this approach proves untenable.
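
A rough illustration of the bandwidth argument follows; the request layout, sample rate, and network figures are assumptions for illustration, not the NPSNET design.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical per-sound request a client might send to a sound server. */
    struct sound_request {
        uint32_t event_id;    /* which sound to render                        */
        float    azimuth;     /* listener-relative direction, in degrees      */
        float    elevation;
        float    range_m;     /* distance, for attenuation and reverberation  */
    };

    int main(void)
    {
        /* The request is small; the reply is rendered audio. Assuming the
         * server returns one 44.1 kHz, 16-bit mono stream per client:       */
        double kbps = 44100.0 * 16.0 / 1000.0;        /* about 706 kbit/s per stream */
        printf("one rendered stream : %.1f kbit/s\n", kbps);
        printf("eight active clients: %.2f Mbit/s of a shared 10 Mbit/s Ethernet\n",
               8.0 * kbps / 1000.0);
        return 0;
    }

The asymmetry is the problem: position updates are tiny, but shipping rendered audio back to every client consumes a large fraction of a shared local-area network.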

A third approach creates a library of prerecorded, positioned sound files. This system requires the host CPU only to determine position information, locate the appropriate sound file, and play the prerecorded spatial sound. While relieving the CPU of the heavy burden of rendering real-time spatial sound, the sound-library approach introduces an average of 0.8 seconds of latency between a sound event and its presentation, which far exceeds the traditional 0.1-second latency threshold. Beyond this threshold, human listeners begin to disassociate visual and aural events. The chief cause of the 0.8-second latency is the overhead of the UNIX file system: retrieving, opening, and reading the sound data.
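
A minimal sketch of the sound-library lookup appears below. The 30-degree quantization grid and the file naming convention are assumptions for illustration; the point is that every playback goes through a filesystem open and read, which is the overhead identified above.

    #include <stdio.h>
    #include <math.h>

    /* Quantize a listener-relative direction to the nearest prerecorded file and play it. */
    static void play_positioned_sound(const char *event, double az, double el)
    {
        char path[256];
        FILE *fp;
        int az_bin = ((int)floor(az / 30.0 + 0.5) * 30) % 360;   /* nearest 30 degrees (assumed) */
        int el_bin = (int)floor(el / 30.0 + 0.5) * 30;

        /* Hypothetical naming convention: <event>_<azimuth>_<elevation>.aiff */
        sprintf(path, "/sounds/%s_%03d_%+03d.aiff", event, az_bin, el_bin);

        /* This open/read is the filesystem overhead behind the ~0.8-second latency. */
        fp = fopen(path, "rb");
        if (fp == NULL)
            return;
        /* ... read the samples and hand them to the audio device ... */
        fclose(fp);
    }

    int main(void)
    {
        play_positioned_sound("explosion", 45.0, 0.0);
        return 0;
    }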

With the recent addition of the Acoustetron II, the AA-Lab not only provides researchers with a quality spatial sound system to enhance virtual simulation research, but also gains a performance benchmark against which to compare current and future work in closed-field spatial sound production. That work continues to seek alternative, low-cost methods of rendering spatial sound.


Loudspeaker Systems

The NPSNET-3D Sound Server (NPSNET-3DSS) is a MIDI-based loudspeaker sound system consisting of commercial sound equipment and student-written software that generates 3-D aural cues through a cube configuration of eight loudspeakers known as the NPSNET Sound Cube (Storms, 1995). Using an algorithm similar to stereo panning, the system distributes sound among the speakers of the sound cube to create an apparent (phantom) image of the sound event at the correct azimuth and elevation relative to the listener. To enhance distance perception, the system adds synthetic reverberation, produced by two Ensoniq DP/4 Parallel Effects Processors, to discrete sound events via real-time MIDI modulation messages.
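
As one deliberately simplified reading of the panning idea, the sketch below computes a gain for each of the eight cube loudspeakers from a source azimuth and elevation, in the spirit of constant-power stereo panning. The speaker layout, gain law, and normalization are assumptions for illustration; the actual NPSNET-3DSS algorithm is documented in (Storms, 1995).

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Cube-corner loudspeaker directions (normalized by the sqrt(3) factor below). */
    static const double SPK[8][3] = {
        { 1, 1, 1}, { 1, 1,-1}, { 1,-1, 1}, { 1,-1,-1},
        {-1, 1, 1}, {-1, 1,-1}, {-1,-1, 1}, {-1,-1,-1}
    };

    /* Compute a gain per loudspeaker from the source azimuth/elevation (degrees,
     * listener-relative), analogous to constant-power stereo panning. */
    static void pan_gains(double az_deg, double el_deg, double gain[8])
    {
        double az = az_deg * M_PI / 180.0;
        double el = el_deg * M_PI / 180.0;
        double src[3];
        double sum = 0.0;
        int i;

        src[0] = cos(el) * sin(az);   /* x: right */
        src[1] = cos(el) * cos(az);   /* y: front */
        src[2] = sin(el);             /* z: up    */

        for (i = 0; i < 8; i++) {
            double dot = (SPK[i][0]*src[0] + SPK[i][1]*src[1] + SPK[i][2]*src[2]) / sqrt(3.0);
            gain[i] = (dot > 0.0) ? dot : 0.0;   /* only speakers facing the source contribute */
            sum += gain[i] * gain[i];
        }
        for (i = 0; i < 8; i++)                  /* normalize so total power stays constant */
            gain[i] = (sum > 0.0) ? gain[i] / sqrt(sum) : 0.0;
    }

    int main(void)
    {
        double g[8];
        int i;

        pan_gains(30.0, 15.0, g);   /* a source 30 degrees to the right, 15 degrees up */
        for (i = 0; i < 8; i++)
            printf("speaker %d gain %.3f\n", i, g[i]);
        return 0;
    }

Distance cues would then be layered on separately, for example by sending real-time MIDI modulation messages to adjust the reverberation produced by the effects processors.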

Current research efforts with the NPSNET-3DSS include

    • developing dynamic and moving sounds;
    • immersing listeners into NPSNET as mobile, free-standing human beings;
    • integrating the Emulator E4K (by E-mu Systems, Inc.), which provides 128-voice polyphony;
    • conducting various anechoic chamber experiments to verify spatial effectiveness;
    • integrating headphone systems to test effectiveness of hybrid systems (e.g., loudspeakers combined with headphones);
    • testing the effectiveness of generating ambient sounds with binaural recordings through the Lexicon CP-1 Plus Digital Audio Environment Processor;
    • providing compatibility with the Virtual Reality Modeling Language standard;
    • and experimenting with the MIDI and audio capabilities of Silicon Graphics Inc. workstations.


Acknowledgments

AA-Lab research information is freely available to anyone interested. The research efforts within the AA-Lab are funded by a large number of US government agencies, including the Defense Advanced Research Projects Agency; the US Army Research Laboratories; the Defense Modeling and Simulation Office; the US Army TRADOC Analysis Center; the US Army Topographic Engineering Center; the US Army Simulation, Training, and Instrumentation Command (STRICOM) and its PM-DIS; the Office of Naval Research; and the Research, Development, Test, and Evaluation Division of the Naval Command, Control, and Ocean Surveillance Center.


References

Crystal River Engineering (CRE) (May 1996). World Wide Web Home Page. Available at http://www.cre.com/

Cockayne, William, Zyda, Michael, Barham, Paul, Brutzman, Don, & Falby, John. (1996). The laboratory for human interaction in the virtual environment. Presented at the ACM Symposium on Virtual Reality Software and Technology (VRST) 1996, Hong Kong: University of Hong Kong. Available at http://www-npsnet.cs.nps.navy.mil/npsnet/publications.html

International MIDI Association. (1983). 1.0 MIDI Specification.

Macedonia, Michael R., Brutzman, Donald P., Zyda, Michael J., Pratt, David R., Barham, Paul T., Falby, John, & Locke, John. (1995). NPSNET: A multi-player 3-D virtual environment over the Internet. In Proceedings of the 1995 Symposium on Interactive 3-D Graphics. Monterey, California. Available at http://www-npsnet.cs.nps.navy.mil/npsnet/publications.html

Storms, Russell L. (1995). NPSNET-3D Sound Server: An effective use of the auditory channel. Monterey, CA: Naval Postgraduate School. Available at http://www-npsnet.cs.nps.navy.mil/npsnet/publications.html

Zyda, Michael J., Pratt, David, Falby, John, Lombardo, Chuck, & Kelleher, Kristen. (1993a). The software required for the computer generation of virtual environments. Presence, 2(2), 130-140. Available at http://www-npsnet.cs.nps.navy.mil/npsnet/publications.html

Zyda, Michael J., Pratt, David, Falby, John, Barham, Paul, & Kelleher, Kristen. (1993b). NPSNET and the naval postgraduate school graphics and video laboratory. Presence, 2(3), 244-258. Available at http://www-npsnet.cs.nps.navy.mil/npsnet/publications.html

Zyda, Michael J., Pratt, David R., Pratt, Shirley M., Barham, Paul T., & Falby, John S. (1995). NPSNET-HUMAN: Inserting the human into the networked synthetic environment. In Proceedings of the 13th DIS Workshop (pp. 18-22). Orlando. Available at http://www-npsnet.cs.nps.navy.mil/npsnet/publications.html



Author Information

Naval Postgraduate School
Department of Computer Science
Monterey, California 93943-5118
USA
Email: {storms, ljbiggs, cockayne, barham, falby, brutzman}@cs.nps.navy.mil
zyda@siggraph.org