
The Paper Abstracts
for
The International Conference on Auditory Display '92


James A. Ballas (Naval Research Laboratory) Delivery of Information Through Sound

Robin Bargar (NCSA) Pattern and Reference in Auditory Display

Meera M. Blattner (Anderson Cancer Research Center)
Albert L. Papp III (University of California, Davis)
Ephraim P. Glinert (Rensselaer Polytechnic Institute) Sonic Enhancement of Two-Dimensional Graphics Displays

Sara Bly (Xerox PARC) Multivariate Data Mappings

Stephen A. Brewster (University of York)
Peter C. Wright (University of York)
Alistair D. N. Edwards (University of York) A Detailed Investigation into the Effectiveness of Earcons

Jonathan Cohen (Apple Computer, Inc.) Monitoring Background Activities

W. Tecumseh Fitch (Brown University)
Gregory Kramer (Clarity/Santa Fe Institute) Sonifying the Body Electric: Superiority of an Auditory over a Visual Display in a Complex, Multivariate System

William W. Gaver (Rank Xerox Cambridge EuroPARC) Using and Creating Auditory Icons

Chris Hayward (Southern Methodist University) Listening to the Earth Sing

Jay Alan Jackson (University of Southwestern Louisiana)
Joan M. Francioni (University of Southwestern Louisiana) Synchronization of Visual and Aural Parallel Program Performance Data

David H. Jameson (IBM T. J. Watson Research Center) Sonnet: Audio-Enhanced Monitoring and Debugging

Gregory Kramer (Clarity/Santa Fe Institute) Some Organizing Principles for Representing Data with Sound

Tara M. Madhyastha (University of Illinois)
Daniel A. Reed (University of Illinois) A Framework for Sonification Design

Gottfried Mayer-Kress (University of Illinois at Urbana-Champaign)
Robin Bargar (University of Illinois at Urbana-Champaign)
Insook Choi (University of Illinois at Urbana-Champaign) Musical Structures in Data from Chaotic Attractors

Kevin McCabe (Sterling Software)
Akil Rangwalla (MCAT Institute) Auditory Display of Computational Fluid Dynamics Data

Elizabeth D. Mynatt (Georgia Institute of Technology) Auditory Presentation of Graphical User Interfaces

Carla Scaletti (Symbolic Sound Corporation and University of Illinois) Sound Synthesis Algorithms for Auditory Data Representations

Stuart Smith (University of Massachusetts)
Ronald M. Pickett (University of Massachusetts)
Marian G. Williams (University of Massachusetts) Environments for Exploring Auditory Representations of Multidimensional Data

Elizabeth M. Wenzel (NASA Ames Research Center) Spatial Sound and Sonification

Sheila M. Williams (University of Sheffield) Perceptual Principles in Sound Grouping

Delivery of Information Through Sound
James A. Ballas Naval Research Laboratory
Code 5535, Washington, DC 20375-5337
(202) 404-7988 ballas@itd.nrl.navy.mil

The potential to deliver information through sound is rapidly expanding with new technology, new techniques, and significant advances in our understanding of hearing. Although these changes raise important new issues about the design of sound delivery systems, there is already a wide range of knowledge scattered through different disciplines about communicating information through nonspeech sound such as sonification. An overview of how sound can deliver information is presented using a framework of linguistic analogies.

Areas that will be discussed in some detail include contextual and expectancy effects, which operate when tonal sounds as well as realistic sounds are interpreted.


Pattern and Reference in Auditory Display
Robin Bargar
National Center for Supercomputing Applications,
and School of Music University of Illinois at Urbana-Champaign
152 Computing Applications Building
605 East Springfield Avenue Champaign, IL 61820
(217)244-4692 rbargar@ncsa.uiuc.edu

This paper addresses the potential for identifying common concerns and opportunities for collaboration that link scientific research methods with the field of auditory display, a field closely related to music composition. The capability of listeners to differentiate sounds meaningfully is a complex construct that involves a system with sound-producing potential and an organized observation of that system by a sound designer who may be considered a composer. By describing the application of music composition techniques to the auditory display of scientific data, a connection can be established between compositional thought processes and scientific observation.


Sonic Enhancement of Two-Dimensional Graphics Displays
Meera M. Blattner
Department of Biomathematics
M.D. Anderson Cancer Research Center
University of Texas Medical Center, Houston

Albert L. Papp III
Department of Applied Science
University of California, Davis, and Lawrence Livermore National Laboratory
Livermore, CA 94551

Ephraim P. Glinert
Dept. of Computer Science
Rensselaer Polytechnic Institute Troy, NY 12180

By studying the specific example of a visually cluttered map, we discover general principles that lead to a taxonomy of characteristics for the successful utilization of nonspeech audio to enhance the human-computer interface. Our approach is to divide information into families, each of which is then separately represented in the audio subspace by a set of related earcons.

Animations are used to introduce these earcons to the user, so as to link each earcon in his/her mind with a visual representation. From then on, the earcon suggests the corresponding visual representation to the user, including any real-world sound that may be associated with it, even though the earcon itself is not a real-world sound.
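
The abstract does not give a concrete construction, but the family idea can be sketched roughly as follows (a hypothetical Python illustration, not code from the paper): each information family is assigned a base motif, and individual members are derived as related transformations of that motif, so that members of one family are heard as belonging together.

# Illustrative sketch (not from the paper): one earcon family per
# information family, with members derived from a shared base motif.

# A motif is a list of (semitone offset from a reference pitch, duration in beats).
BASE_MOTIFS = {
    "roads":     [(0, 0.5), (4, 0.5), (7, 1.0)],   # rising triad
    "utilities": [(0, 0.25)] * 4,                   # repeated short notes
}

def family_member(family, variant):
    """Derive a related earcon by transposing the family's base motif.

    Members of a family share rhythm and contour, so they sound related;
    the transposition distinguishes individual members.
    """
    motif = BASE_MOTIFS[family]
    return [(pitch + 2 * variant, dur) for pitch, dur in motif]

if __name__ == "__main__":
    for family in BASE_MOTIFS:
        for variant in range(3):
            print(family, variant, family_member(family, variant))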


Multivariate Data Mappings
Sara Bly
Xerox PARC
3333 Coyote Hill Road Palo Alto, CA 94304
bly@parc.xerox.com

An on-going issue in data exploration is how best to represent the data to support finding its structure and patterns. Visualization techniques are traditionally useful but audio representations are being explored as well.

This paper describes an exercise in which three different aural mappings were presented for the same six-dimensional data. Informal observations of the use of these mappings indicated the importance of identifying structure in the data and of integrating data exploration techniques.
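
The three mappings themselves are not specified in the abstract; as a hypothetical Python sketch, an aural mapping can be expressed as an assignment of the six data dimensions to sound parameters, and alternative mappings are simply alternative assignments applied to the same records.

# Hypothetical illustration: three alternative assignments of six data
# dimensions (d0..d5, assumed normalized to 0..1) to sound parameters.
MAPPINGS = {
    "A": {"d0": "pitch", "d1": "loudness", "d2": "duration",
          "d3": "brightness", "d4": "pan", "d5": "vibrato"},
    "B": {"d0": "duration", "d1": "pitch", "d2": "pan",
          "d3": "loudness", "d4": "vibrato", "d5": "brightness"},
    "C": {"d0": "pan", "d1": "brightness", "d2": "pitch",
          "d3": "vibrato", "d4": "duration", "d5": "loudness"},
}

def sonify_record(record, mapping_name):
    """Turn one six-dimensional record into a dict of sound parameters."""
    mapping = MAPPINGS[mapping_name]
    return {param: record[dim] for dim, param in mapping.items()}

record = {"d0": 0.2, "d1": 0.9, "d2": 0.5, "d3": 0.1, "d4": 0.7, "d5": 0.4}
for name in MAPPINGS:
    print(name, sonify_record(record, name))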


A Detailed Investigation into the Effectiveness of Earcons
Stephen A. Brewster
HCI Group
Department of Computer Science
University of York, Heslington, York, YO1 5DD, UK
Tel.: 0904 432765 sab@minster.york.ac.uk

Peter C. Wright
HCI Group
Department of Computer Science
University of York, Heslington, York, YO1 5DD, UK
Tel.: 0904 432765 sab@minster.york.ac.uk

Alistair D. N. Edwards
HCI Group
Department of Computer Science
University of York, Heslington, York, YO1 5DD, UK
Tel.: 0904 432765 sab@minster.york.ac.uk

A detailed experimental evaluation of earcons was carried out to see whether they are an effective means of communicating information in sound. An initial experiment showed that earcons were better than unstructured bursts of sound and that musical timbres were more effective than simple tones.

Musicians were shown to be no better than non-musicians when using musical timbres. A second experiment was then carried out which improved upon some of the weaknesses of the pitches and rhythms used in Experiment 1 to give a significant improvement in recognition. From the results some guidelines were drawn up for designers to use when creating earcons.

These experiments have formally shown that earcons are an effective method for communicating complex information in sound.


Monitoring Background Activities
Jonathan Cohen
ATG Human Interface Group
Apple Computer, Inc.,
One Infinite Loop MS 301-3H, Cupertino, CA 95014
(408)974-2884 cohenj@applelink.apple.com

The personal computer is increasingly becoming a center for delegated and autonomous background activity. How can users be notified about this activity without having their foreground task disrupted?

The audio channel offers a number of advantages for notification. ShareMon, a prototype application, employs audio---sound effects or text-to-speech---or graphical messages to notify users about file sharing (a type of background activity). Informally, and in the course of a user study with ShareMon, users found all three modalities informative, but they found that all of the modalities disrupted their foreground activity to some extent.

These reactions raise two issues: when is the use of a particular modality appropriate, and how can a sound be designed to be simultaneously informative, pleasant, and/or unobtrusive? Although I discuss a theoretical approach to these issues, I argue that an empirical design-and-test methodology provides a powerful way to resolve them.


Sonifying the Body Electric:
Superiority of an Auditory over a Visual Display in a Complex, Multivariate System
W. Tecumseh Fitch
Department of Cognitive and Linguistic Sciences
Brown University Providence, RI 02912

Gregory Kramer
Clarity/Santa Fe Institute
SW 19th Street Portland, OR 97201

Recent advances in the technology of computer sound generation allow sound to play a new role in human/machine interfaces. However, few studies have investigated the use of sound to display complex data in a practical setting.

In this paper we introduce an auditory display for physiological data and compare it experimentally with a standard visual display. Subjects (college students) played the role of anesthesiologists, attempting to keep a computer-simulated "digital patient" alive and healthy through a series of operating room emergencies. Both the task and the stimuli were complex: subjects had to monitor eight continuously changing variables simultaneously, to identify problems (indicated by changes in one or three variables at once), and then to correct those problems. We found that subjects performed faster and more accurately when using the auditory display than when using the visual display.

This difference was most pronounced with multivariate changes. We hypothesize that the auditory advantage may result from the inherent ability of the auditory system to process multiple auditory "objects" or "streams" simultaneously in parallel, in contrast to the visual system's propensity for processing multiple objects serially. If correct, this idea has important implications for the use of sound in computer interfaces.


Using and Creating Auditory Icons
William W. Gaver
Rank Xerox Cambridge EuroPARC
61 Regent Street Cambridge CB2 1AB, UK gaver@europarc.xerox.com

Auditory icons are everyday sounds that convey information about events in the computer or in remote environments by analogy with everyday sound-producing events. Several examples of interfaces that use auditory icons demonstrate that they can add valuable functionality to computer interfaces, particularly when they are parameterized to convey dimensional information. But because they are based on a new approach to sound and hearing that emphasizes perceptual and acoustic attributes of auditory event perception, they are difficult to create and manipulate if standard synthesis and sampling techniques are used.

In order to support their creation, new synthesis algorithms are introduced which are controlled along dimensions of events rather than those of the sounds themselves. Several algorithms, developed from research on auditory event perception, are described in enough detail here to permit their implementation. They produce a variety of impact, bouncing, breaking, scraping, and machine sounds. By controlling them with attributes of relevant computer events, a wide range of parameterized auditory icons may be created.
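
The paper itself gives the algorithms in detail; the following is only an indicative Python sketch of the general idea, in which an impact is modeled as a bank of decaying partials whose frequency, damping, and amplitude are driven by event attributes (size, hardness, force) rather than by signal parameters. The specific formulas are invented for illustration and are not Gaver's.

import wave
import numpy as np

RATE = 44100

def impact(size=1.0, hardness=0.5, force=0.5, duration=0.6):
    """Crude impact sound controlled by event attributes.

    size     -> larger objects get a lower fundamental
    hardness -> harder materials ring longer
    force    -> overall amplitude
    (Indicative only; not the algorithm from the paper.)
    """
    t = np.linspace(0.0, duration, int(RATE * duration), endpoint=False)
    f0 = 400.0 / size                      # bigger object, lower pitch
    decay = 3.0 + 12.0 * (1.0 - hardness)  # softer material, faster damping
    out = np.zeros_like(t)
    for k, rel_amp in enumerate([1.0, 0.6, 0.4, 0.25], start=1):
        out += rel_amp * np.sin(2 * np.pi * f0 * k * t) * np.exp(-decay * k * t)
    return force * out / np.max(np.abs(out))

def write_wav(path, samples):
    pcm = (np.clip(samples, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(RATE)
        f.writeframes(pcm.tobytes())

write_wav("small_hard_tap.wav", impact(size=0.5, hardness=0.9, force=0.4))
write_wav("large_soft_thud.wav", impact(size=2.0, hardness=0.2, force=0.8))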


Listening to the Earth Sing
Chris Hayward
Southern Methodist University Department of Geology, Dallas TX

Techniques for auditory monitoring and analysis of seismic data are described. Unlike many other kinds of data, seismograms may be successfully audified with a minimum of processing. The technique works so well because both sound in air and seismic waves in rock follow the same basic physics, that described by the elastic wave equation.

Both exploration seismology, which examines only the upper few miles of the earth, and planetary seismology, which examines larger structures including the earth's core, may make use of auditory display. Previously published work is limited to two papers now nearly 30 years old, which examine the utility of audio display for the problem of discriminating earthquakes from nuclear explosions.

The applications are much broader, though, including training, quality control, free oscillation display, data discovery, large data set display, event recognition, education, model matching, signal detection, and onset timing. Problems in audifying seismograms arise when the subsonic, wide-dynamic-range signals must be rescaled into the audio range without introducing distracting artifacts.

Simple processing techniques including interpolation, time compression, automatic gain control, frequency doubling, audio annotation and markers, looping, and stereo are used to create seven example audio data sets. These seven examples illustrate the use of audio in presenting synthetic seismograms, shallow reflection data, quality control during field recording, noise analysis for earthquake observatories, earthquake analysis for events from various distances, nuclear explosions, and stereo display of seismic array data. The use of audio for seismic quality control, analysis, and interpretation will develop only when audio displays become integrated into the daily tools of seismologists.
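
A minimal audification along these lines might look like the following Python sketch (the seismogram here is synthetic and the processing choices are illustrative): normalize the trace with a crude automatic gain control, then write the 100 samples-per-second record out at an audio sample rate, which time-compresses the subsonic signal into the audible band.

import wave
import numpy as np

def agc(signal, window=2000):
    """Very simple automatic gain control: divide by a running RMS envelope."""
    power = np.convolve(signal ** 2, np.ones(window) / window, mode="same")
    return signal / (np.sqrt(power) + 1e-9)

# Synthetic stand-in for a seismogram: 10 minutes at 100 samples/s with an
# "event" arriving partway through (a real record would be loaded instead).
field_rate = 100
t = np.arange(0, 600, 1.0 / field_rate)
trace = 0.01 * np.random.randn(t.size)
trace += np.exp(-((t - 300) / 20.0) ** 2) * np.sin(2 * np.pi * 2.0 * t)

audio = agc(trace)
audio = audio / np.max(np.abs(audio))

# Writing the 100 Hz samples out at 44100 Hz time-compresses the recording
# by a factor of 441, shifting its energy up into the audible range.
pcm = (audio * 32767).astype(np.int16)
with wave.open("seismogram.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(44100)
    f.writeframes(pcm.tobytes())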


Synchronization of Visual and Aural Parallel Program Performance Data
Jay Alan Jackson and Joan M. Francioni
Computer Science Department
University of Southwestern Louisiana Lafayette, LA 70504-1771
jaj7298@ucs.usl.edu
318/231-6768

Understanding the behavior of a program that runs on a parallel computer poses a challenge to programmers due to the difficulties of analyzing multiple concurrent events. In order to test, debug, and tune the performance of a parallel program, it is necessary to study information such as interprocessor communication logs and processor utilization profiles. Visual tools have been developed to make this job easier, but often substantial effort is still required to interpret and comprehend multiple graphical and textual views.

In this paper, we discuss the properties of parallel programs that are suitable for aural representation and present a number of examples of sound mappings that have been implemented. A prototype tool which provides synchronized visual and aural displays for depicting parallel program behavior is described, and the justification for, and the effectiveness of, this approach are discussed.

In general, sound was found to be a natural medium in which to recognize certain patterns and timing information related to the run-time performance of a parallel program. When combined visual and aural cues were provided, speed of recognition and retention of relevant details were observed to improve over either method alone.
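
As a rough illustration of the kind of mapping involved (not the paper's actual tool or mapping), a message-passing trace can be turned into a list of timed notes, with the same time-scale factor driving both the aural playback and a visual replay so that the two displays stay synchronized.

# Illustrative Python sketch: map a parallel trace to notes.
# Each event becomes (onset_seconds, pitch, label); pitch encodes the
# processor, and the same time-scale factor would drive the visual replay.

TRACE = [  # (timestamp_us, processor, event)
    (0,    0, "send"), (120,  1, "recv"),
    (400,  1, "send"), (510,  2, "recv"),
    (900,  2, "send"), (1010, 0, "recv"),
]

TIME_SCALE = 1e-3          # 1 microsecond of trace -> 1 millisecond of playback
BASE_MIDI = 60             # processor 0 -> middle C

def trace_to_notes(trace):
    notes = []
    for timestamp, proc, event in trace:
        pitch = BASE_MIDI + 4 * proc          # a separate register per processor
        pitch += 7 if event == "send" else 0  # sends a fifth above receives
        notes.append((timestamp * TIME_SCALE, pitch, f"P{proc}:{event}"))
    return notes

for onset, pitch, label in trace_to_notes(TRACE):
    print(f"{onset:6.3f}s  MIDI {pitch}  {label}")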


Sonnet: Audio-Enhanced Monitoring and Debugging
David H. Jameson
IBM T. J. Watson Research Center
Yorktown Heights, NY 10598

Sonnet is an audio-enhanced debugger under development in the Mathematical Sciences Department at IBM Research.

The issues in which we are interested include the use of sophisticated yet easy-to-understand sounds to aid in understanding program execution, how to shift the programmer's focus from the narrow line-oriented view of a program to a global gestalt or holistic view, and finally how to provide a lightweight graphical user interface that allows run-time interaction rather than postmortem analysis.

An important goal of Sonnet is that it should be easy to predict what sounds will be generated in advance of execution. Should an unexpected sound be produced, the user may then investigate more closely in the hope of finding the anomaly. A prototype was built on top of an internal debugger for the IBM RS/6000 workstation to evaluate the feasibility and usefulness of the sounds. A new system is now in operation and incorporates the features recognized as important from our original experiments. This paper describes the current work in progress.
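
Sonnet instruments a real debugger; a loose Python analogue of the idea (purely illustrative, with a hypothetical function-to-pitch table) hooks function entry and exit and reports the tone that would be triggered, so a listener could form a global impression of control flow rather than a line-by-line one.

import sys

# Rough analogue of audio-enhanced execution monitoring (not Sonnet itself):
# hook function entry/exit and report the tone that would be played.

PITCH = {"fib": 72, "helper": 60}  # hypothetical function-to-pitch table

def audio_tracer(frame, event, arg):
    name = frame.f_code.co_name
    if name in PITCH:
        if event == "call":
            print(f"play {PITCH[name]} (enter {name})")
        elif event == "return":
            print(f"play {PITCH[name] - 12} (leave {name})")

def helper(n):
    return n - 1

def fib(n):
    return n if n < 2 else fib(helper(n)) + fib(n - 2)

sys.setprofile(audio_tracer)
fib(4)
sys.setprofile(None)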


Some Organizing Principles for Representing Data with Sound
Gregory Kramer
Clarity Nelson Lane
Garrison, NY 10524 kramer@santafe.edu

Techniques for auditory data representation, and the perceptual issues they raise, are discussed. Sonification, audification, and audiation are defined in terms of mediating structures between the data and the listener. A software system for sonification research is described and parameter nesting, the control of a single auditory variable on several time scales simultaneously, is suggested as a technique for achieving high-dimensional displays.

The use of both realistic and abstract sounds for auditory display is discussed in the context of parameter nesting. Techniques for the weighting and balancing of attentionally compelling display components are discussed and it is suggested that a 100% balanced display can only be approximated. The technique of using "beacons" for orienting oneself within an auditory display is discussed and examples of applications are suggested.

Gestalt formation is recognized as an operant factor for auditory display in general and beacons in particular. The techniques of data family/stream association, data type/parameter association, global, inter-stream, and per-stream linking, and metaphorical and affective association are described and suggested as means of making sonification displays more intuitive, more comprehensible, and easier to use.
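
Parameter nesting can be sketched as follows (a hypothetical Python illustration, not the software system described in the paper): one auditory variable, pitch, simultaneously carries three data streams at different time scales, with the slowest stream setting the register, a medium stream adding semitone-scale movement, and the fastest stream supplying a shallow vibrato.

import numpy as np

RATE = 44100
DUR = 4.0
t = np.linspace(0.0, DUR, int(RATE * DUR), endpoint=False)

# Three hypothetical data streams, one per time scale (values in 0..1).
slow   = 0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)   # e.g. a long-term trend
medium = 0.5 + 0.5 * np.sin(2 * np.pi * 2.0 * t)    # e.g. a per-second value
fast   = 0.5 + 0.5 * np.sin(2 * np.pi * 30.0 * t)   # e.g. a rapidly updated value

# Nest all three in a single auditory variable: pitch.
freq = 220 * 2 ** (slow * 1.0)        # slow stream: which octave we sit in
freq *= 2 ** (medium * 2 / 12)        # medium stream: up to two semitones
freq *= 2 ** (fast * 0.3 / 12)        # fast stream: shallow vibrato depth

phase = 2 * np.pi * np.cumsum(freq) / RATE
signal = 0.3 * np.sin(phase)
print(signal.shape)  # scale to 16-bit PCM and write with the wave module to audition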


A Framework for Sonification Design
Tara M. Madhyastha and Daniel A. Reed
Department of Computer Science University of Illinois Urbana, Illinois 61801

One of the obstacles to widespread experimentation with sonification of data has been the lack of a standard model for sound generation, and a standard interface to control that model. This paper describes Porsonify, a tool kit that provides a uniform network interface to sound devices through table-driven sound servers.

Sonifications can be constructed that encapsulate all device-specific functions in control files for each server. A user interface to configure sound devices and sonifications can be generated independent of the underlying hardware. This framework was easily integrated with Pablo, an environment designed to support the performance analysis of massively parallel computer systems, providing synchronized sound and graphics.

Several sonifications of both multivariate data and time-varying performance data, created in this environment, are described. We conclude with a brief description of planned extensions, including integration with a virtual reality system.
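
The following toy Python analogue (not Porsonify's actual file format or interface) illustrates the table-driven idea: a control table maps data fields to device-neutral parameter messages, and each sound server supplies a backend that turns those messages into commands for its particular device, so the sonification itself never touches device-specific code.

# Toy analogue of a table-driven sonification server.

CONTROL_TABLE = {
    "cpu_utilization": {"parameter": "pitch",    "low": 200.0, "high": 2000.0},
    "message_volume":  {"parameter": "loudness", "low": 0.1,   "high": 1.0},
}

def to_message(field, value):
    """Map a normalized data value (0..1) to a device-neutral message."""
    entry = CONTROL_TABLE[field]
    scaled = entry["low"] + value * (entry["high"] - entry["low"])
    return {"parameter": entry["parameter"], "value": scaled}

def midi_backend(message):
    """One possible backend; another server could target a synthesis engine."""
    print("MIDI-ish command:", message)

for field, value in [("cpu_utilization", 0.8), ("message_volume", 0.3)]:
    midi_backend(to_message(field, value))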


Musical Structures in Data from Chaotic Attractors
Gottfried Mayer-Kress
Center for Complex Systems Research, Department of Physics
3025 Beckman Institute, 405 N. Mathews, Urbana, IL 61801
gmk@pegasos.ccsr.uiuc.edu (NeXT-Mail)

Robin Bargar
National Center for Supercomputing Applications and School of Music
University of Illinois at Urbana-Champaign

Insook Choi
Computer Music Project and Experimental Music Studios, Composition Division, School of Music
University of Illinois at Urbana-Champaign

One of the most prominent aspects of data from natural phenomena is their irregularity and complexity. Many universal aspects of such phenomena can be described in the context of chaotic dynamics, and chaotic attractors can serve as model generators of such data. Auditory representations used in conjunction with chaotic attractors can be designed to reveal the unique properties of nonlinear dynamical systems representing complex phenomena.

The design of such an auditory representation can benefit from being informed by observations common to both chaotic and musical structure. Recurrence structures in chaotic systems, including intermittency and self-similarity, are compatible characteristics for drawing analogies to musical structures. In this paper we explore several designs for auditory representation of chaotic systems. These include both low-level methods, where the sequence of system states is mapped directly onto auditory parameters, and higher-level methods, which map derived statistical quantities, such as approximations of the probability distribution (measure) of an attractor, into polyphonic auditory constructions.

We focus on a few simple dynamical systems where we have a clear understanding of the structure of chaotic attractors, so that we can draw analogies between their representation using sound and the complex non-linguistic structures found in music. Using these analogies we can develop a new generation of auditory representation tools.
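
A low-level mapping of the kind mentioned above can be sketched in a few lines of Python (illustrative only; the paper's own mappings and systems differ): integrate the Lorenz system and map one coordinate to pitch and another to loudness, so the recurrence and intermittency of the attractor become audible.

import numpy as np

def lorenz(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with a simple Euler step."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        xyz[i] = (x, y, z)
    return xyz

states = lorenz(2000)

# Low-level mapping: each system state becomes one short tone.
x, z = states[:, 0], states[:, 2]
pitch = 440 * 2 ** ((x - x.min()) / (x.max() - x.min()) * 2 - 1)  # +/- 1 octave
loud  = 0.2 + 0.8 * (z - z.min()) / (z.max() - z.min())

for freq, amp in list(zip(pitch, loud))[:10]:
    print(f"{freq:7.1f} Hz at amplitude {amp:.2f}")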


Auditory Display of Computational Fluid Dynamics Data
Kevin McCabe
Sterling Software

Akil Rangwalla
MCAT Institute

Auditory Display (AD) is the mapping of values in some data space onto parameters in acoustic space. This paper discusses some of the motivations and techniques for using auditory display in the analysis of data generated from computational fluid dynamics (CFD) simulations.

Two simulations are used as case studies. In the first case, data from a simulation of the Penn State artificial heart pump is analyzed using a technique called parameter mapping. In the second case the tonal acoustics of rotor-stator interaction inside turbomachinery are directly simulated.
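
The two case studies correspond to two different display strategies, which can be contrasted in a toy Python sketch (the probe signal below is synthetic and the scalings are arbitrary): parameter mapping lets a slowly varying flow quantity steer the pitch of an audible carrier, while a simulation that resolves acoustic-range pressure fluctuations can be auditioned by playing those samples back directly.

import numpy as np

RATE = 44100

# Hypothetical probe signal from a CFD run: pressure sampled 500 times over
# one simulated second (a real run would supply this array).
sim_t = np.linspace(0.0, 1.0, 500)
pressure = np.sin(2 * np.pi * 3 * sim_t) + 0.3 * np.sin(2 * np.pi * 11 * sim_t)

# Strategy 1, parameter mapping: the slow pressure curve steers the pitch of
# an audible carrier.
p_norm = (pressure - pressure.min()) / (pressure.max() - pressure.min())
freq = np.interp(np.linspace(0, 1, RATE), sim_t, 300 + 600 * p_norm)
mapped = 0.3 * np.sin(2 * np.pi * np.cumsum(freq) / RATE)

# Strategy 2, direct audification: if the simulation resolves acoustic-range
# frequencies (as in the rotor-stator case), its pressure samples can be
# normalized and played back as the waveform itself.
audified = pressure / np.max(np.abs(pressure))

print(mapped.shape, audified.shape)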


Auditory Presentation of Graphical User Interfaces
Elizabeth D. Mynatt
Graphics, Visualization & Usability Center College of Computing
Georgia Institute of Technology Atlanta, GA 30332-0280
beth@cc.gatech.edu

The majority of work in auditory user interfaces has focused on adding auditory cues to a visual interface. This paper presents work in designing interactive auditory-only interfaces where the design of the interface is driven by a challenging task---providing access to graphical user interfaces for people who are blind.

This task requires that the auditory-only interface must be able to provide the same functionality supported by a graphical user interface. A prototype system called Mercator is described as well as some of the design strategies for Mercator's auditory interfaces. The results from a small user study are also presented with a concluding discussion on future research efforts.


Sound Synthesis Algorithms for Auditory Data Representations
Carla Scaletti
Symbolic Sound Corporation and University of Illinois
P.O. Box 2530 Champaign, IL 61825-2530, USA
(217)355-6273 c-scaletti@uiuc.edu

A working definition of sonification is presented, and previous work is categorized as sound at the user interface, applications in specific domains, studies of sonification itself, or general tools and systems development. A brief history is given for the technology of sound synthesis; the author's sound specification language Kyma is defined, and its application in data-as-event, data-as-signal, real-time, and interactive approaches to sonification is explained.

Several sound synthesis techniques---data as samples, multiplication, addition, granular synthesis, nonlinear distortion techniques, filtering, physical modeling, and sampled sounds---are described and examples are given of their application in sonification. Some of the open questions in sonification are outlined, and some predictions are made on the future of sonification.
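
The data-as-event and data-as-signal approaches can be contrasted in a small Python sketch (illustrative only; Kyma itself is a sound specification language, and none of this is Kyma code): in the first, each data point triggers its own tone, while in the second the data set is treated as a waveform or control signal in its own right.

import numpy as np

RATE = 44100
data = np.array([0.1, 0.8, 0.4, 0.9, 0.2, 0.6])  # toy data set in 0..1

# Data-as-event: each data point triggers a short tone, value mapped to pitch.
def as_events(values, note_dur=0.2):
    out = []
    t = np.linspace(0.0, note_dur, int(RATE * note_dur), endpoint=False)
    for v in values:
        freq = 220 + 660 * v
        out.append(0.3 * np.sin(2 * np.pi * freq * t) * np.hanning(t.size))
    return np.concatenate(out)

# Data-as-signal: the data set is treated as a (very short) waveform and
# upsampled to audio rate, so its overall shape becomes an audible contour.
def as_signal(values, dur=1.0):
    src = np.linspace(0.0, 1.0, values.size)
    dst = np.linspace(0.0, 1.0, int(RATE * dur))
    return 0.3 * (2 * np.interp(dst, src, values) - 1)

print(as_events(data).shape, as_signal(data).shape)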


Environments for Exploring Auditory Representations of Multidimensional Data
Stuart Smith
Computer Science Department
University of Massachusetts Lowell, MA 01854
stu@cs.ulowell.edu

Ronald M. Pickett
Psychology Department
University of Massachusetts Lowell, MA 01854
pickett@cs.ulowell.edu
Marian G. Williams
Center for Productivity Enhancement
University of Massachusetts Lowell, MA 01854
mwilliam@cs.ulowell.edu

The field of auditory data representation has produced several intriguing proof-of-concept systems, but there has been little formal research to measure the effectiveness of auditory data displays or to increase our understanding of how they work and how to improve them. We argue that formal assessment is necessary throughout the process of developing new auditory display technologies in order to learn how to restrict the universe of possible sound attributes to those that are most effective for data representation.

The ability to run quick psychometric tests to obtain quantitative figures of merit for alternative auditory representations is a requirement for auditory display researchers engaged in the development of new technologies. This capability can be realized with a special-purpose workstation designed to generate and administer psychometric tests automatically, using test patterns generated from statistically well-specified synthetic data. We outline the requirements for such a workstation and describe a testing method for the development of a new type of auditory data display that we have been working with for the last few years.
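
One ingredient of such a workstation, statistically well-specified synthetic test data, can be sketched as follows (a hypothetical Python illustration with arbitrary parameters): "target-absent" patterns are drawn from one multivariate Gaussian and "target-present" patterns from a mean-shifted one, so that percent correct for any candidate auditory mapping can be scored against known ground truth.

import numpy as np

rng = np.random.default_rng(0)

DIMS = 5          # variables per test pattern
N_TRIALS = 40     # patterns per condition
SHIFT = 0.8       # known mean offset that defines the "signal"

cov = np.eye(DIMS)                         # specified covariance structure
absent  = rng.multivariate_normal(np.zeros(DIMS),       cov, size=N_TRIALS)
present = rng.multivariate_normal(np.full(DIMS, SHIFT), cov, size=N_TRIALS)

# Each row would be sonified and played to a listener, who judges
# "signal present" or "signal absent"; because the generating statistics are
# known, figures of merit can be computed for each candidate mapping.
trials = [(row, "absent") for row in absent] + [(row, "present") for row in present]
order = rng.permutation(len(trials))
trials = [trials[i] for i in order]
print(len(trials), "test patterns, first label:", trials[0][1])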


Spatial Sound and Sonification
Elizabeth M. Wenzel
Aerospace Human Factors Research Division
NASA Ames Research Center
Mail Stop 262-2, Moffett Field, CA 94035-1000
415-604-6290 FAX: 415-604-3729
beth@eos.arc.nasa.gov

Immersive or artificially generated three-dimensional environments are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still in its early developmental stages, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays.

This paper provides a brief description of three-dimensional sound synthesis and describes the performance advantages that can be expected when these techniques are applied to sound streams in sonification displays. Specific examples, and the lessons learned from each, are discussed for applications in telerobotic control, aeronautical displays, and shuttle launch communications.
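
The techniques described in the paper are based on filtering sources with measured head-related transfer functions; the Python sketch below is a deliberately crude stand-in that uses only interaural time and level differences to place a mono sonification stream in the horizontal plane, and it is not the method used at NASA Ames.

import numpy as np

RATE = 44100

def crude_pan(mono, azimuth_deg):
    """Place a mono signal with interaural time and level differences only.

    A crude stand-in for HRTF-based synthesis: real spatial sound systems
    filter the signal with measured head-related transfer functions rather
    than applying a bare delay and gain.
    """
    az = np.radians(azimuth_deg)          # -90 (left) .. +90 (right)
    itd = 0.0007 * np.sin(az)             # up to ~0.7 ms interaural delay
    delay = int(abs(itd) * RATE)
    gain_r = 0.5 * (1 + np.sin(az))       # simple level difference
    gain_l = 1.0 - gain_r

    left, right = gain_l * mono, gain_r * mono
    if itd > 0:      # source on the right: the left ear hears it later
        left = np.pad(left, (delay, 0))[: mono.size]
    else:            # source on the left: the right ear hears it later
        right = np.pad(right, (delay, 0))[: mono.size]
    return np.stack([left, right], axis=1)

t = np.linspace(0.0, 1.0, RATE, endpoint=False)
stream = 0.3 * np.sin(2 * np.pi * 500 * t)
stereo = crude_pan(stream, azimuth_deg=60)   # place the stream to the right
print(stereo.shape)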


Perceptual Principles in Sound Grouping
Sheila M. Williams
Departments of Computer Science and Psychology University of Sheffield
Regent Court 211 Portobello Street Sheffield, S1 4DP, UK s.williams@dcs.shef.ac.uk

Essential to the development of a theory of Auditory Display is a thorough understanding of the perception of complex sounds. An introduction to Auditory Grouping principles is presented here, explaining the difference between analytic and synthetic listening and introducing examples of different "gestalt" processes as they apply in the auditory mode.

Sound examples are provided to demonstrate each of these processes. This is followed by a brief overview of methods of investigating sound grouping and an outline of modeling methods that have been applied. The STREAMER computational model of perceptual grouping is introduced, together with a brief explanation of some of the experimental methods employed in the acquisition of data to support the development of the model. The chapter concludes with a consideration of the implications of sound grouping for the purposes of Auditory Display.
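
One of the classic grouping demonstrations relevant here, auditory streaming, can be generated with a few lines of Python (parameters are illustrative): an alternating two-tone sequence tends to be heard as a single coherent stream when the frequency separation is small and as two co-occurring streams when it is large.

import numpy as np

RATE = 44100

def tone(freq, dur=0.1):
    t = np.linspace(0.0, dur, int(RATE * dur), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def alternating_sequence(low_freq, high_freq, repeats=10):
    """ABAB... sequence: small frequency separation tends to be heard as one
    stream, large separation as two parallel streams."""
    pair = np.concatenate([tone(low_freq), tone(high_freq)])
    return np.tile(pair, repeats)

one_stream  = alternating_sequence(500, 550)   # small separation
two_streams = alternating_sequence(500, 1200)  # large separation
print(one_stream.shape, two_streams.shape)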

