
Bibliography


Albright, L., A. J. Jackson, and J. Francioni. "Auralization of Parallel Programs." SIGCHI Bull. 23(4) (1991): 86--87.

Reasons why the auditory system excels at various tasks are outlined in this article describing parallel program debugging with sound.

Allen, J. B., and D. A. Berkley. "Image Model for Efficiently Modeling Small-Room Acoustics." J. Acoust. Soc. Am. 65 (1979): 943--950.

One of the core papers on the image model for simulating reverberant rooms; the model has since been applied to the interactive synthesis of virtual acoustic sources.
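The image model replaces each wall reflection with a mirrored "image" of the source; the room response is then the sum of delayed, attenuated copies arriving from all images. A minimal sketch of the idea, restricted to first-order reflections in a rectangular ("shoebox") room (function and variable names here are illustrative, not taken from the paper):

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def first_order_images(source, room):
    """Return the six first-order image-source positions for a shoebox room.

    source: (x, y, z) position inside the room.
    room:   (Lx, Ly, Lz) dimensions, with walls at 0 and L along each axis.
    Each wall mirrors the source across its plane; higher-order images
    (reflections of reflections) are omitted for brevity.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(source)
            img[axis] = 2.0 * wall - img[axis]  # mirror across the wall plane
            images.append(tuple(img))
    return images

def delays(listener, sources):
    """Propagation delay in seconds from each (image) source to the listener."""
    result = []
    for s in sources:
        dist = sum((a - b) ** 2 for a, b in zip(s, listener)) ** 0.5
        result.append(dist / SPEED_OF_SOUND)
    return result
```

Each image's delay and distance-based attenuation would feed one tap of the simulated room impulse response; the full method also weights each tap by the wall reflection coefficients.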

Arons, B. "Interactively Skimming Recorded Speech." In Proceedings of the User Interface Software and Technology (UIST) Conference. Reading, MA: ACM Press/Addison-Wesley, 1993 (in press).

A nonvisual user interface for interactively skimming speech recordings is described. SpeechSkimmer uses simple speech-processing techniques to allow a user to hear recorded sounds quickly, and at several levels of detail. User interaction through a manual input device provides continuous real-time control of speed and detail level of the audio presentation.

Arons, B. "Techniques, Perception, and Applications of Time-Compressed Speech." In Proceedings of the 1992 Conference of the American Voice I/O Society, held Sep. 1992, 169--177.

A review of time-compressed speech, including the limits of perception, practical time-domain compression techniques, and an extensive bibliography.
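One of the simplest time-domain techniques covered in such reviews is the sampling (snippet-discard) method: keep a short portion of each fixed-length frame and drop the rest, shortening the signal without shifting its pitch. A toy sketch under that assumption (function and parameter names are hypothetical; practical systems smooth the frame joins, e.g. with cross-fades or synchronized overlap-add):

```python
def sampling_compress(samples, rate, factor, frame_ms=30.0):
    """Discard part of every frame so speech plays back `factor` times
    faster at the original pitch.

    Illustrative only: frame joins are left unsmoothed, so audible
    clicks would remain in a real audio signal.
    """
    frame = int(rate * frame_ms / 1000.0)  # samples per analysis frame
    keep = max(1, int(frame / factor))     # samples retained per frame
    out = []
    for start in range(0, len(samples), frame):
        out.extend(samples[start:start + keep])
    return out
```

With `factor=2.0` the output is roughly half the input length; frame sizes on the order of tens of milliseconds keep the discarded snippets short relative to phoneme durations.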

Arons, B. "A Review of the Cocktail Party Effect." J. Am. Voice I/O Soc. 12 (Jul. 1992): 35--50.

A review of research in the area of multichannel and spatial listening with an emphasis on techniques that could be used in speech-based systems.

Arons, B. "Hyperspeech: Navigating in Speech-Only Hypermedia." In Hypertext '91, 133--146. Reading, MA: ACM Press/Addison-Wesley, 1991.

Hyperspeech is a speech-only (nonvisual) hypermedia application that explores issues of speech user interfaces, navigation, and system architecture in a purely audio environment without a visual display. The system uses speech recognition input and synthetic speech feedback to aid in navigating through a database of digitally recorded speech segments.

Arons, B. Hyperspeech (videotape). ACM SIGGRAPH Video Rev. 88 (1993). InterCHI '93 Technical Video Program.

A four-minute video showing the Hyperspeech system in use.

Arons, B. "The Design of Audio Servers and Toolkits for Supporting Speech in the User Interface." J. Am. Voice I/O Soc. 9 (Mar. 1991): 27--41.

An overview of audio servers and design considerations for toolkits built on top of an audio server to provide a higher-level programming interface. Arons describes tools for rapidly prototyping and debugging multimedia servers and applications, and includes details of a SPARCstation-based audio server, a speech recognition server, and several interactive applications.

Asano, F., Y. Suzuki, and T. Sone. "Role of Spectral Cues in Median Plane Localization." J. Acoust. Soc. Am. 88 (1990): 159--168.

A study of localization cues using simulated transfer functions simplified via auto-regressive moving-average models, designed to determine which cues are critical for median plane localization. The conclusion was that macroscopic patterns above 5 kHz are used to judge elevation, while macroscopic high-frequency patterns together with microscopic patterns below 2 kHz are used for front-rear judgment.

Astheimer, P. "Sonification Tools to Supplement Dataflow Visualization." In Third Eurographics Workshop on Visualization in Scientific Computing, held April 1992, in Viareggio, Italy. (Also in Scientific Visualization--Advanced Software Techniques, edited by Patrizia Palamidese, 15--36. London: Ellis Horwood, 1993.)

Astheimer presents a detailed concept for the integration of sonification tools in dataflow visualization systems. The approach is evaluated with an implementation of the tools within the apE system of the Ohio Supercomputer Center and illustrated with several examples.

Astheimer, P. "Realtime Sonification to Enhance the Human-Computer Interaction in Virtual Worlds." In Proceedings of the Fourth Eurographics Workshop on Visualization in Scientific Computing, held April 1993, in Abingdon, England.

An overview of IGD's virtual reality system "Virtual Design." Several acoustic rendering algorithms are explained, covering sound events, direct sound propagation, a statistical approach, and the image-source algorithm.

Astheimer, P. "Sounds of Silence--How to Animate Virtual Worlds with Sound." In Proceedings ICAT/VET, held May 1993, in Houston, Texas, USA.

The author presents a concept for an audiovisual virtual reality environment. The facilities of IGD's virtual reality demonstration center and the architecture of the proprietary system "Virtual Design" are introduced. The general processing and data interpretation schema is explained.

Astheimer, P. "What You See is What You Hear--Acoustics Applied to Virtual Worlds." IEEE Symposium on Virtual Reality, held October 1993, in San Jose, California, USA. Los Alamitos, CA: IEEE Computer Society Press, 1993.

This paper concentrates on the realization and problems of calculating sound propagation in arbitrary environments in real time. A brief overview of IGD's virtual reality system "Virtual Design" and its basic framework is given. The differences between graphic and acoustic models and rendering algorithms are discussed. Possible solutions for the rendering and subsequent auralization phases are explained. Several examples demonstrate the application of acoustic renderers.
