
Bibliography


Ballas, J. A. "Delivery of Information Through Sound." In Auditory Display: Sonification, Audification, and Auditory Interfaces, edited by G. Kramer. Santa Fe Institute Studies in the Sciences of Complexity, Proc. Vol. XVIII. Reading, MA: Addison Wesley, 1994.

Ballas presents an overview of how different forms of information can be effectively delivered through nonspeech sound. The coverage is organized by linguistic devices. In addition, some details are presented on the importance of listener expectancy, and how it may be measured.

Ballas, J. A. "Common Factors in the Identification of an Assortment of Brief Everyday Sounds." J. Exp. Psych.: Hum. Percep. & Perf. 19 (1993): 250--267.

Ballas presents five experiments conducted to investigate factors involved in the identification of brief everyday sounds. In contrast to other studies, the sounds were quite varied in type, and the factors studied ranged across acoustic, ecological, perceptual, and cognitive domains. Results support a hybrid approach to understanding sound identification.

Ballas, J. A., and T. Mullins. "Effects of Context on the Identification of Everyday Sounds." Hum. Perf. 4(3) (1991): 199--219.

The authors present the results of four experiments investigating the effects of context on the identification of brief everyday sounds. The sounds were nearly homonymous (i.e., similar sounds produced by different causes). Results showed that context had a significant effect, especially in biasing listeners against a sound cause that was inconsistent with the context.

Ballas, J. A., and J. H. Howard, Jr. "Interpreting the Language of Environmental Sounds." Envir. & Beh. 19 (1987): 91--114.

The authors present some comparisons between the perceptual identification of environmental sounds and well-studied speech perception processes. Comparisons are made at the macro level, as well as in the details.

Begault, D. R., and E. M. Wenzel. "Headphone Localization of Speech." Hum. Factors 35(2) (1993): 361--376.

An empirical study of subjects judging the position of speech presented over headphones using nonindividualized HRTFs. Subjects expressed their judgments by saying their estimate of distance and direction after each speech segment was played. Patterns of errors are described, and it is concluded that useful azimuth judgments for speech are possible for most subjects using nonindividualized HRTFs.

Bidlack, R. "Chaotic Systems as Simple (but Complex) Compositional Algorithms." Comp. Music J. 16(3) (1992): 33--47.

Bidlack describes portraying nonlinear mathematical phenomena in his music, much as earlier composers drew on prime numbers and the Fibonacci series.

Blattner, Meera M., Ephraim P. Glinert, and Albert L. Papp, III. "Sonic Enhancements for Two-Dimensional Graphic Displays." In Auditory Display: Sonification, Audification, and Auditory Interfaces, edited by G. Kramer. Santa Fe Institute Studies in the Sciences of Complexity, Proc. Vol. XVIII. Reading, MA: Addison Wesley, 1994.

By studying the specific example of a visually cluttered map, the authors suggest general principles that lead to a taxonomy of characteristics for the successful utilization of nonspeech audio to enhance the human-computer interface. Their approach is to divide information into families, each of which is then separately represented in the audio subspace by a set of related earcons.

Blattner, Meera M. "Sound in the Multimedia Interface." In The Proceedings of Ed-Media '93, held June 23--26, 1993, in Orlando, Florida, 76--81. Association for the Advancement of Computing in Education, 1993.

The focus of this article is on recent developments in audio; however, the motivation for the use of sound is to provide a richer learning experience. This article begins with a description of the flow state, that state of mind in which we are deeply involved with what we are doing, and proposes some techniques for achieving the flow state through our use of audio.

Blattner, M. M., and R. M. Greenberg. "Communicating and Learning Through Non-speech Audio." In Multimedia Interface Design in Education, edited by A. Edwards and S. Holland. NATO ASI Series F, 133--143. Berlin: Springer-Verlag, 1992.

This article begins with an examination of the way structured sounds have been used by human beings in a variety of contexts and goes on to discuss how the lessons from the past may help us in the design and use of sound in the computer-user interface. Nonspeech sound messages, called earcons, are described with an application to the study of language.

Blattner, M. M., R. M. Greenberg, and M. Kamegai. "Listening to Turbulence: An Example of Scientific Audiolization." In Multimedia Interface Design, edited by M. Blattner and R. Dannenberg, 87--102. Reading, MA: ACM Press/Addison-Wesley, 1992.

The authors discuss some of the sonic elements that could be used to represent fluid flow.

Blattner, Meera M., and R. B. Dannenberg, eds. Multimedia Interface Design. Reading, MA: ACM Press/Addison-Wesley, 1992. To be published in Chinese by Shanghai Popular Press, 1994.

Eight of the 21 chapters of this book are focused on sound in the multimedia interface. Many of the other chapters consider the role of sound as one of the components of the multimedia interface.

Blattner, M. M., D. A. Sumikawa, and R. M. Greenberg. "Earcons and Icons: Their Structure and Common Design Principles." Hum.-Comp. Inter. 4(1) (1989): 11--44.

This article describes earcons, auditory messages used in the computer-user interface to provide information and feedback. The focus of the article is on the structure of earcons and the design principles they share with icons.

Blauert, J. Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, MA: MIT Press, 1983.

The author provides a thorough overview of the psychophysical research on spatial hearing in Europe (particularly Germany) and the United States prior to 1983. Classic text on sound localization.

Blauert, J. "Sound Localization in the Median Plane." Acustica 22 (1969): 205--213.

The author describes a series of experiments that demonstrated the role of linear distortions caused by pinna filtering in localizing sound in the median plane. He demonstrates that the "duplex theory," which postulated interaural time and intensity differences as the cues for localization, was not sufficient to explain all localization phenomena.

Bly, S. "Sound and Computer Information Presentation." Unpublished doctoral dissertation, University of California, Davis, 1982.

Bly evaluates auditory displays for three classes of data: multivariate, logarithmic, and time-varying. A series of formal experiments on multivariate data displays was conducted, demonstrating that in many cases auditory displays elicited human performance equal to or greater than that elicited by conventional visual displays.

Bly, S., S. P. Frysinger, D. Lunney, D. L. Mansur, J. J. Mezrich, and R. C. Morrison. "Communication with Sound." In Readings in Human-Computer Interaction: A Multidisciplinary Approach, edited by R. Baecker and W. A. S. Buxton, 420--424. Los Altos: Morgan Kaufmann, 1987.

Contributors discussed their approaches to communicating data via sound at CHI '85, and this chapter is a result of that presentation. It was the first national conference session dedicated exclusively to the general use of nonspeech audio for data representation.

Boff, K. R., L. Kaufman, and J. P. Thomas. Handbook of Perception and Human Performance. Sensory Processes and Perception, Vol. 1. New York: John Wiley & Sons, 1986.

Various sound parameters are delineated and discussed, including their interpretation by individuals having auditory pathologies. An excellent first source for the definition of sound parameters and inquiry into the complexities of sonic phenomena.

Boff, K. R., and J. E. Lincoln, eds. Engineering Data Compendium: Human Perception and Performance. Ohio: Armstrong Aerospace Medical Research Laboratory, Wright-Patterson Air Force Base, 1988.

This three-volume compendium distills information from the research literature about human perception and performance that is of potential value to systems designers. Plans include putting the compendium on CD. A separate user's guide is also available.

Borin, G., G. De Poli, and A. Sarti. "Algorithms and Structures for Synthesis Using Physical Models." Comp. Music J. 16(4) (1993).

This is the introductory article to two special issues on physical modeling for sound synthesis in this excellent journal; this article reviews techniques.

Bregman, A. S. "Auditory Scene Analysis." In Proceedings of the 7th International Conference on Pattern Recognition, held in Montreal, 168--175, 1984.

Classic paper in which the concept of assigning components of a complex acoustic signal to multiple perceptual streams was first introduced.

Bregman, A. S., and Y. Tougas. "Propagation of Constraints in Auditory Organization." Percep. & Psycho. 46(4) (1989): 395--396.

The authors present psychoacoustic evidence that grouping occurs on the basis of all evidence in the acoustic signal. This is not consistent with grouping as a consequence of the output from particular filters.

Bregman, A. S. Auditory Scene Analysis. Cambridge, MA: MIT Press, 1990.

Bregman provides a comprehensive theoretical discussion of the principal factors involved in the perceptual organization of auditory stimuli, especially Gestalt principles of organization in auditory stream segregation.

Brewster, S. A., P. C. Wright, and A. D. N. Edwards. "A Detailed Investigation into the Effectiveness of Earcons." In Auditory Display: Sonification, Audification, and Auditory Interfaces, edited by G. Kramer. Santa Fe Institute Studies in the Sciences of Complexity, Proc. Vol. XVIII. Reading, MA: Addison Wesley, 1994.

The authors carried out experiments on structured audio messages, earcons, to see if they were an effective means of communicating in sound. An initial experiment showed earcons to be better than unstructured sounds and musical timbres to be more effective than simple tones. A second experiment was carried out to develop ideas from the first. A set of guidelines is presented.

Bronkhorst, A. W., and R. Plomp. "The Effect of Head-Induced Interaural Time and Level Differences on Speech Intelligibility in Noise." J. Acous. Soc. Am. 83 (1988): 1508--1516.

Spoken sentences in Dutch with noise were recorded in an anechoic room with a KEMAR manikin, and the role of interaural time delay and head shadowing on intelligibility was studied.

Brown, M. H. "An Introduction to Zeus: Audiovisualization of Some Elementary Sequential and Parallel Sorting Algorithms." In CHI '92 Proceedings, 663--664. Reading, MA: ACM Press/Addison-Wesley, 1992.

Visualization and sonification of parallel programs demonstrate that sound can reinforce, supplant, and expand the visual channel.

Brown, M. L., S. L. Newsome, and E. P. Glinert. "An Experiment into the Use of Auditory Cues to Reduce Visual Workload." In CHI '89 Proceedings, 339--346. Reading, MA: ACM Press/Addison-Wesley, 1989.

Sound is presented as a means to reduce visual overload. However, subject testing revealed a doubled reaction time for sound cueing vs. visual cueing. The authors recommend aural training for effective implementation.

Buford, J. K. Multimedia Systems. Reading, MA: ACM/Addison-Wesley, 1993.

Provides a technical overview of multimedia systems, including information on sound and video recording, signal processing, system architectures, and user interfaces. Covers fundamental principles, applications and current research in multimedia, as well as operating systems, database management systems, and network communication.

Burdic, W. S. Underwater Acoustic System Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1984.

This book provides a good general background in the fundamentals of sonar systems, including a historical background, basic acoustics, transducers, ocean acoustics, sonar signal processing, decision theory, beamforming, and active and passive systems. Although the presentation is sometimes mathematically technical, no specific background is assumed.

Burgess, D. "Techniques for Low Cost Spatial Audio." In UIST '92: User Interface Software and Technology. Reading, MA: ACM Press/Addison-Wesley, in press.

Burgess describes a technique for synthetic spatialization of audio and the computational requirements and performance of the technique.

Burns, E. M., and D. Ward. "Intervals, Scales, and Tuning." In The Psychology of Music, edited by Diana Deutsch. New York: Academic Press, 1982.

A discussion of sensory consonance and dissonance reviews the work of earlier researchers and the conclusions regarding pure and complex tone interpretation. A good base reference for further exploration into consonance and dissonance.

Butler, R. A., and R. A. Humanski. "Localization of Sound in the Vertical Plane With and Without High-Frequency Spectral Cues." Percep. & Psycho. 51(2) (1992): 182--186.

Noise bursts were played over seven loudspeakers spaced 15 degrees apart in the vertical plane, and subjects judged the position of the sources. The authors conclude from the results that, without pinna cues, subjects can still localize low-pass noise in the lateral vertical plane using binaural time and level differences, but that pinna cues are critical for accurate localization in the median vertical plane.

Buttenfield, B. P., and C. R. Weber. "Visualization and Hypermedia in GIS." In Human Factors in GIS, edited by H. Hearnshaw and D. Medyckyj-Scott. London: Belhaven Press, in press.

An overview of sonification types is presented for their implementation into cartographic displays and Geographic Information Systems.

Buxton, W., W. Gaver, and S. Bly. "The Use of Non-speech Audio at the Interface." Tutorial no. 10, given at CHI '89.

A good overview of the use of nonspeech audio, the psychology of everyday listening, alarms and warning systems, and pertinent issues from psychoacoustics and music perception. A number of classic papers are reproduced.

Buxton, B. "Using Our Ears: An Introduction to the Use of Nonspeech Audio Cues." In Extracting Meaning from Complex Data: Processing, Display, Interaction, edited by E. J. Farrel, Vol. 1259, 124--127. SPIE, 1990.

An overview of the classes of audio cue and their utility.

Buxton, W., and T. Moran. "EuroPARC's Integrated Interactive Intermedia Facility (iiif): Early Experiences." In Proceedings of the IFIP WG8.4 Conference on Multi-User Interfaces and Applications, held September, 1990, in Herakleion, Crete.

The authors review the design, technology, and early applications of EuroPARC's media space: a computer-controlled network of audio and video gear designed to support collaboration.
