ICAD'98 Tutorials

Sunday, 1st November will be a tutorial day at ICAD, and we will be running a really interesting set of tutorials. You can find all of the details below. Each tutorial lasts half a day: Tutorials 1 and 2 run in the morning, and Tutorials 3 and 4 in the afternoon.

Make sure you book the tutorials when you register for ICAD. Numbers are limited, so book early. The tutorials cost £25 each for academic/academic-related staff and industrial attendees, and are free to students (proof of student status will be required when you attend the conference). When booking, please state which morning and which afternoon tutorial you most prefer. We will then do our best to match your preferences; if we cannot give you your first choice, we will allocate you to the other tutorial (unless that is full).


1. Sounds of Action and Sounds of Silence

by Mikael Fernstrom, University of Limerick

In the Interaction Design Centre at the University of Limerick in Ireland, we have been working on various sonification projects for the past two years, ranging from Direct Sonification for browsing music data sets to various ways of creating widgets with sound, the latter based on an ecological approach. On the silent side of things, we have conducted a series of simple experiments with blind people and blindfolded sighted people to find out what people can hear in real environments with real objects; we hope the results will guide us in the development of virtual sonic environments.

When trying to add meaningful sonic representations to actions in the human-computer interface, a number of quite different design possibilities can be considered, from Earcon-based systems to Auditory Icons to abstract sonifications. How can sound provide both qualitative and quantitative dynamic information simultaneously while a user is in action? With our Sound of Action Project (SOAP), the emphasis is not so much on the sound itself but on how the sound changes depending on the activity that the sound-generating component is engaged in over time and location: for example, a filling-container sound that can recognizably be heard to approach a full state, and whose filling rate the listener can roughly judge, used to enhance a download-status widget.
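A minimal sketch of what such a parameterised cue might look like is given below. It is purely illustrative, not the SOAP implementation: the frequency range and the simple sinusoid-plus-noise synthesis are our own assumptions, chosen only to show how download progress could drive an everyday-sound parameter.

    import numpy as np

    SAMPLE_RATE = 22050  # Hz; assumed playback rate

    def filling_vessel_cue(progress, duration=0.25):
        """Synthesise a short cue whose resonant pitch rises as a download
        progresses, loosely mimicking the rising resonance of a filling vessel.

        progress: 0.0 (empty) .. 1.0 (full); duration in seconds.
        Returns a mono float array in [-1, 1].
        """
        # Map fill level to a resonant frequency: empty vessel ~ 300 Hz,
        # nearly full ~ 1200 Hz (illustrative values only).
        freq = 300.0 + 900.0 * progress
        t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        # A decaying sinusoid stands in for the vessel's resonance;
        # a short burst of noise stands in for the "pouring" texture.
        tone = np.sin(2 * np.pi * freq * t) * np.exp(-6.0 * t)
        splash = 0.1 * np.random.randn(t.size) * np.exp(-12.0 * t)
        cue = tone + splash
        return cue / np.max(np.abs(cue))

    # Example: play a cue each time another 10% of the file has arrived.
    cues = [filling_vessel_cue(p) for p in np.linspace(0.1, 1.0, 10)]

The point of such a mapping is that the listener judges the state (how full) and the rate (how fast it is filling) from the way the sound changes, not from the identity of the sound itself.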

Is this approach a defensible one? Should the emphasis be on auditory systems where pure notes or sequences of notes carry the information? Is it important to develop a model and system with which we can generate realistic everyday sounds for use in an Auditory Icon style interface?

To contextualise things and spaces (with and without VR), can auditory display techniques be an important component? Should large information spaces sound like cathedrals and small spaces sound like cupboards?

We'll have a number of demos (if desired).


2. Sonic Framing: Relating Aural Perception to Visual Experience

by Eric Somers, State University of New York

This workshop teaches a pedagogy for visual design in which sound structures are analyzed and used as a basis for creating visual compositions, and a related aural pedagogy in which visual art is analyzed and used as a basis for creating electro-acoustic sound compositions.

Using Professor Somers' re-framing concept, the sound composer analyzes a work of visual art, breaking down each element and studying the relationships of the elements in space, and uses this to create a sound composition in which elements are similarly related in time. This, too, counters the traditional approach to teaching sound and music composition, in which students listen to the work of other composers and then write pieces that have similar characteristics.

Both aspects of the pedagogy rely completely on non-representational aural and visual forms in order to ensure that the focus is on structure and composition, rather than on references to objects apart from the work of art. Thus in no case does one play a visual artist a sound that resembles a rocket taking off. There would be too much temptation to draw a rocket. In this pedagogy sound is used as a basis for organizing a visual composition, not for determining its visual reference to the outside world.

In the proposed workshop participants will:

  1. Learn the underlying theoretical principles of sensory re-framing as applied to the use of sound to enhance visual creativity and the use of visual art to stimulate aural imagination.
  2. See examples of student sound compositions and visual designs resulting from use of the re-framing pedagogies.
  3. Experience first-hand, through a series of workshop exercises, how increased visual imagination can result from basing visual design on aural experience.


3. User-Centred Design Principles and their Application in the Effective Design of Auditory Displays

by Bruce Walker, Rice University and IBM

New and exciting systems which use auditory display are being created in many scientific fields. However, the developers of these systems are rarely experts in design or usability, so the focus tends to be on the underlying technical aspects of the application, and not the usability of the system and its interface.

User Centered Design (UCD) is an industry-standard method of conceptualizing, prototyping and implementing a complete system, with the end-user's tasks as the main focus of design. This leads to faster and cheaper development of more acceptable, user-friendly systems, which satisfy the technical, functional, and usability requirements of the task.

This workshop will introduce the philosophy of UCD, and the simple step-by-step methods for designing any type of system. We will discuss concepts such as compatibility and population stereotypes, plus specific guidelines for designing both visual and auditory displays. Participants will evaluate sample interfaces, work through example design problems, and prototype a new system from the ground up. Note that UCD is a widely-applicable design method; the examples and material in this workshop will cover a range of systems and interfaces, making this workshop valuable for designers of all types of interfaces, including both auditory and visual displays.

Bruce Walker is an Interface Design Consultant for IBM, and a PhD student in Human Factors Psychology at Rice University in Houston, Texas.


4. Psychophysics and Technology of Virtual Acoustic Displays

by Elizabeth M. Wenzel, NASA Ames Research Center

Virtual acoustics, also known as 3-D sound and auralization, is the simulation of the complex acoustic field experienced by a listener within an environment. Going beyond the simple intensity panning of normal stereo techniques, the goal is to process sounds so that they appear to come from particular locations in three-dimensional space. Although loudspeaker systems are being developed, most of the recent work focuses on using headphones for playback and is the outgrowth of earlier analog techniques. For example, in binaural recording, the sound of an orchestra playing classical music is recorded through small mics in the two "ear canals" of an anthropomorphic artificial or "dummy" head placed in the audience of a concert hall. When the recorded piece is played back over headphones, the listener passively experiences the illusion of hearing the violins on the left and the cellos on the right, along with all the associated echoes, resonances, and ambience of the original environment. Current techniques use digital signal processing to synthesize the acoustical properties that people use to localize a sound source in space. Thus, they provide the flexibility of a kind of digital dummy head, allowing a more active experience in which a listener can both design and move around or interact with a simulated acoustic environment in real time. Such simulations are being developed for a variety of application areas including architectural acoustics, advanced human-computer interfaces, telepresence and virtual reality, navigation aids for the visually-impaired, and as a test bed for psychoacoustical investigations of complex spatial cues.
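As a rough sketch of the core operation, and assuming a pair of measured head-related impulse responses (HRIRs, the time-domain form of the HRTFs discussed below) is available for the desired direction, the synthesis amounts to filtering the source signal with the left-ear and right-ear responses. The function and variable names here are our own, for illustration only:

    import numpy as np
    from scipy.signal import fftconvolve

    def spatialise(mono, hrir_left, hrir_right):
        """Render a mono signal at the direction for which the given
        head-related impulse responses (HRIRs) were measured.

        mono, hrir_left, hrir_right: 1-D float arrays at the same sample rate.
        Returns an (N, 2) stereo array intended for headphone playback.
        """
        # Convolving with each ear's impulse response imposes the interaural
        # time and level differences and spectral shaping for that direction.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        stereo = np.stack([left, right], axis=1)
        return stereo / np.max(np.abs(stereo))

    # Moving sources are typically handled by updating (and cross-fading
    # between) HRIR pairs measured at neighbouring directions as the
    # source or listener moves.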

The tutorial will review the basic psychoacoustical cues that determine human sound localization and the techniques used to measure these cues as Head-Related Transfer Functions (HRTFs) for the purpose of synthesizing virtual acoustic environments. The only conclusive test of the adequacy of such simulations is an operational one in which the localization of real and synthesized stimuli are directly compared in psychophysical studies. To this end, the results of psychophysical experiments examining the perceptual validity of the synthesis technique will be reviewed and factors that can enhance perceptual accuracy and realism will be discussed. Of particular interest is the relationship between individual differences in HRTFs and in behavior, the role of reverberant cues in reducing the perceptual errors observed with virtual sound sources, and the importance of developing perceptually valid methods of simplifying the synthesis technique.

Recent attempts to implement the synthesis technique in real-time systems will also be discussed, and an attempt will be made to interpret their quoted system specifications in terms of perceptual performance.

Finally, some critical research and technology development issues for the future will be outlined.