Listening to the Mind Listening
Concert of Sonifications at the Sydney Opera House

Creative Producer: Stephen Barrass

The Listening to the Mind Listening concert will be held at the Sydney Opera House as part of the International Conference on Auditory Display (ICAD2004) in Sydney, 6-9 July 2004.

The music in the concert will be sonifications composed from the neural activity of a person listening to a piece of music. Sonification is the mapping of data into sounds for some purpose. A data set containing a recording of this neural activity is available for download from the ICAD website, as described in the Data section of this call. This is an invitation for you to submit a sonification of this data for the concert. Submissions are open to everyone. Ten of the submitted sonifications will be selected for the concert, an audio CD and an accompanying booklet. The concert will be presented by the Sydney Opera House Studio and promoted to the general public.


In his acceptance speech for the 1981 Nobel Prize for Medicine, David Hubel describes how the sound of a neuron firing led to his first important discovery.

"Our first real discovery came as a surprise. We had been doing experiments for about a month and were not getting very far. One day we made an especially stable recording. For 3 or 4 hours we got absolutely nowhere. Then we began to elicit some vague and inconsistent responses by stimulating somewhere in the mid-periphery of the retina. We were inserting the glass slide with its black spot into the slot of the ophthalmoscope when suddenly over the audiomonitor the cell went off like a machine gun. After some fussing and fiddling we found out what was happening. The response had nothing to do with the black dot. As the glass slide was inserted its edge was casting onto the retina a faint but sharp shadow, a straight dark line on a light background. That was what the cell wanted, and it wanted it, moreover, in just one narrow range of orientations."

Listening to the Mind Listening develops this technique of listening to neurons, extending it to explore the neural activity of the entire brain. The goals of the concert are to:

  • explore the idea that people can understand information from sonifications
  • stimulate a new aesthetic of form and function in sound
  • blur and cross the boundaries between sonification and music
  • compare and contrast sonification designs and techniques
  • investigate the listening activity of the mind using sounds instead of graphs


The concert is an investigation at the boundary of art and science. The sonifications need to be musically satisfying for a general audience and scientifically interesting to neuroscientists, and should help develop design knowledge in the auditory display community. In order to open up artistic possibilities, whilst at the same time providing for comparison and analysis, we are imposing some simple constraints on the sonifications.

  • Data-driven. Sonification is a mapping of data into sounds for some purpose. The sonification should be the result of an explicit mapping from the data into sounds. The listener should be able to understand relations and structures in the data from the sonification.
  • Time is the binding. The timeline of the data must map directly to the timeline of the sonification. All other mapping decisions are completely open, but we need to be able to compare pieces across time, and also compare them with the original data set and source piece of music. This means that the final sonification pieces will all be exactly the same duration as the data set and the original piece of music.
  • Reproducibility. The mapping of the data into sound must be described in a manner that can be reproduced by others. Mappings should be described explicitly. Different mappings will enable different perceptions of information in the data. The experiment should lay a foundation for scientific and aesthetic observations and ongoing development by the research community.
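As an illustration of these constraints, here is a minimal sketch of a data-driven, time-bound mapping. All names and parameter choices are hypothetical and not prescribed by this call: each data sample sets the pitch of a sine tone, and the audio timeline is locked to the data timeline so the output has exactly the same duration as the data.

```python
# Hypothetical sketch only: a data-driven mapping where each data
# sample sets the pitch of a sine tone.  The audio timeline is locked
# to the data timeline (500 data samples/s -> 44100 audio samples/s),
# so the sonification has the same duration as the data.
import math

DATA_RATE = 500      # EEG samples per second (from the Data section)
AUDIO_RATE = 44100   # audio samples per second

def sonify(data, base_hz=220.0, span_hz=440.0):
    """Map each data sample to an instantaneous pitch offset."""
    peak = max(abs(v) for v in data) or 1          # normalise to [-1, 1]
    n_out = len(data) * AUDIO_RATE // DATA_RATE    # time is the binding
    audio, phase = [], 0.0
    for i in range(n_out):
        v = data[i * DATA_RATE // AUDIO_RATE]      # data sample for this instant
        freq = base_hz + span_hz * (v / peak)      # explicit data-to-pitch mapping
        phase += 2 * math.pi * freq / AUDIO_RATE
        audio.append(math.sin(phase))
    return audio

tone = sonify([0, 500, -500, 250])                 # 4 data samples = 8 ms of audio
```

Because the mapping is an explicit function of the data, it satisfies all three constraints: it is data-driven, the same duration as the input, and reproducible from the description alone.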


The human brain is made up of 100 billion neurons, each with thousands of connections to other neurons! However the brain is not homogeneous - it is made up of many special purpose regions. Many of these regions are activated by sounds - starting from the cochlea, up the vestibulocochlear nerve, to the superior olive that processes directional cues, on to the pons for recognition and the thalamus that directs attention, as well as the primary and secondary auditory cortex that connect sounds with memories, emotions and thinking. Most techniques for observing neural activity are visual, but sounds may provide alternative insights, especially for temporal patterns such as the well-known alpha, beta, and gamma frequency bands. Below are some starting points for exploring sonification, neural activity, and human auditory processing.


The listener in our experiment was listening to a piece of music by award-winning Indigenous Australian composer David Page. The piece is 5 minutes long, with a wide dynamic range and a blend of natural and synthesised sounds and instruments characteristic of David's mix of traditional and contemporary styles. The actual piece of music is being kept under wraps so that it does not influence the composers in their mappings from the neural data structure into sound. The mystery will be revealed at the finale of the concert when, after the ten sonifications have been played, we hear the original piece of music.

David joined Bangarra Dance Theatre as resident composer and performer in 1991, collaborating on the music for Ninni, Praying Mantis Dreaming and the Atlanta Olympic Games flag handover ceremony in 1996, amongst other projects. He is particularly proud of his music for Ochres which was released as a CD through Larrikin records and won the 1995 Deadly Award for Best Soundtrack (National Indigenous Music, Sport, Entertainment and Community Awards). He went on to win that award for the next two years with Alchemy for the Australian Ballet in 1996, and Fish for Bangarra in 1997. In 2002 David received yet another Deadly, this time for Excellence in Theatrical Score.


The listener wore headphones to hear the music, and a cap with EEG sensors to record neural activity. The 26 sensor electrodes were arranged according to the 10-20 standard for EEG placement. The sensors are labelled by proximity to a region of the brain (F=Front, T=Temporal, C=Central, P=Parietal, O=Occipital), followed by either a 'z' for the midline, or a number that increases with distance from the midline. Odd numbers (1,3,5) are on the left hemisphere and even numbers (2,4,6) on the right, e.g. T4 is on the right temporal lobe, above the right ear. An additional 10 sensors were used to record heart rate, skin conductance, eye movements, breathing and other data. The sensors were recorded as interleaved channels of signed 32 bit integers at a rate of 500 samples per second. The channels were separated into individually named files and converted to ASCII format for simplicity of loading on different systems.
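The per-channel ASCII files can be read with a few lines of code. This is a sketch only, assuming one integer sample per line; the file name used here is a stand-in, not one of the real channel files from the data set.

```python
# Sketch of loading one channel file.  Assumption: one signed 32 bit
# integer sample per line, 500 samples per second.  "demo_channel.txt"
# is a stand-in for a real channel file such as the Cz channel.
SAMPLE_RATE = 500  # Hz, as stated in the data description

def load_channel(path):
    """Read one EEG channel from its ASCII file into a list of ints."""
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

# Write a tiny stand-in file so the sketch is self-contained.
with open("demo_channel.txt", "w") as f:
    f.write("0\n1024\n-512\n2048\n")

samples = load_channel("demo_channel.txt")
duration_seconds = len(samples) / SAMPLE_RATE
```

Dividing the sample count by 500 recovers the recording duration, which every sonification must preserve.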

The data was recorded at the Brain Resource Company by Evian Gordon, Daniel Hermens, and Patrick Hopkinson, in collaboration with Stephen Barrass, on 21 November 2003.


Download the zipped data in ASCII signed 32 bit integer format (~10 MB) from

Download zipped data plots in jpg format (~2 MB) from

Channel          Description                                    Coordinates

ch01-Fp1         Forehead Left                                  17,0,1
ch02-Fp2         Forehead Right                                 343,0,1
ch03-F7          Front Far Left                                 66,0,1
ch04-F3          Front Left                                     59,36,1
ch05-Fz          Front Midline                                  294,0,1
ch06-F4          Front Right                                    301,36,1
ch07-F8          Front Far Right                                294,0,1
ch08-FC3         Front Centre Left                              78,60,1
ch09-FCz         Front Centre Midline                           0,60,1
ch10-FC4         Front Centre Right                             282,60,1
ch11-T3          Temporal Left (above ear)                      90,0,1
ch12-C3          Central Left                                   90,36,1
ch13-Cz          Central Midline                                0,90,1
ch14-C4          Central Right                                  270,36,1
ch15-T4          Temporal Right (above ear)                     270,0,1
ch16-CP3         Central Parietal Left                          120,60,1
ch17-CPz         Central Parietal Midline                       180,60,1
ch18-CP4         Central Parietal Right                         240,60,1
ch19-T5          Temporal Left Back (behind ear)                135,0,1
ch20-P3          Parietal Left                                  149,36,1
ch21-Pz          Parietal Midline                               180,36,1
ch22-P4          Parietal Right                                 211,36,1
ch23-T6          Temporal Right Back (behind ear)               225,0,1
ch24-O1          Occipital Left                                 170,0,1
ch25-Oz          Occipital Midline                              180,0,1
ch26-O2          Occipital Right                                190,0,1
ch27-VPVA        Vertical Above - 1cm above the left eye        20,10,1
ch28-VNVB        Vertical Below - 1cm below the left eye        20,0,1
ch29-HPHL        Horizontal Left - 1cm outside of left eye      25,0,1
ch30-HNHR        Horizontal Right - 1cm outside of right eye    335,0,1
ch31-Erbs        Erb's point reference - mimics heart rate
ch32-OrbOcc      Orbicularis Oculi (1cm outside of VB) - measures startle
ch33-Mass        Masseter (jaw muscle) - measures jaw clenching
ch34             Electrodermal activity - sweat response
ch35-Resp        Breathing
ch36-ECG         Heart Rate

The Opera House Studio and Sound System

The Sydney Opera House Studio is an intimate, flexible space designed primarily for new music and contemporary performance. The seating capacity ranges from 220 to 318, depending on the configuration. The floor area is approximately 15m x 15m, within which flexible tiered seating banks and cabaret-style seating may be arranged. There are two rows of fixed seating on each of the four sides of the gallery. There is a powered overhead grid for hanging speakers with cabling points that connect to a 32 channel mixing console. Layout plans and technical specifications of the Studio are available from

The speaker array consists of 10 speakers placed at ear level, 4 speakers placed 5.8 m above the floor, and one zenith speaker placed 6.8 m above the centre of the room, as shown below.

Here are the polar coordinates for each speaker from the centre of the room:

And here are the speaker layers (ear level, upper level and zenith):


Submissions need to be received by 6 April 2004 to allow for review and selection. Submissions are open to everyone, and will be reviewed by an international panel. The panel will select ten pieces for the concert, audio CD and booklet.

Submissions should consist of a description document and accompanying soundfiles. The description document should have a name made up from the surnames of the contributors, e.g. SmithBrownJones.pdf. The document should be in PDF format, laid out according to the template at . It can be up to 4 pages long and must include the title of the piece, the names and affiliations of contributors, a description of the mapping used to sonify the data, and a list of accompanying soundfiles.


There are three possible approaches to submitting your sonification:

  1. Multiple mono wav files

    You can provide from 1 to 36 wav files with the coordinates of their respective positions. The Lake Huron system will be used to spatialise these sound sources onto the 15-speaker array in the Opera House. The virtual position for each soundfile can be specified in hemispherical coordinates (Distance, Angle, Elevation) as shown below:

    Distance: radius from the centre, in the normalised range 0.0 to 1.0.
    Angle: degrees anticlockwise from the front, in the range 0 to 360.
    Elevation: degrees up from the floor, in the range 0 to 90.
    For example: Soundfile4.wav = (1.0, 45, 54).

    The locations can also be specified in terms of the 10-20 EEG system described in the Data section. For example - Soundfile4.wav = (F3) would place the soundfile at the Front Left location of the F3 sensor on the scalp. This is equivalent to Soundfile4.wav = (1.0, 45, 54).

    The locations can also be specified according to the speaker layouts in standard setups for Mono, Stereo, Quad, Octal, Surround 4.1, and Surround 6.1 provided that you clearly identify each wav file:

  • Soundfile1.wav = front left
  • Soundfile2.wav = back left, etc.

Please note that sound source positions are fixed and cannot be assigned trajectories. This approach has the advantage that no spatialisation knowledge is required but has the disadvantage that sound source movements cannot be rendered.
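A fixed hemispherical triple can be sanity-checked by converting it to Cartesian coordinates. This is a helper sketch, not part of the call, and the Cartesian frame is our assumption (x to the listener's left, y to the front, z up); the call defines only Distance, Angle, and Elevation.

```python
# Helper sketch (not part of the call) for checking source placements.
# Assumed frame: x to the listener's left, y to the front, z up.
import math

def hemi_to_cartesian(distance, angle_deg, elev_deg):
    """Convert a (Distance, Angle, Elevation) triple to x, y, z."""
    a = math.radians(angle_deg)   # anticlockwise from the front
    e = math.radians(elev_deg)    # up from the floor plane
    x = distance * math.cos(e) * math.sin(a)   # left of the listener
    y = distance * math.cos(e) * math.cos(a)   # in front of the listener
    z = distance * math.sin(e)                 # above ear level
    return x, y, z

# The example from the text: Soundfile4.wav = (1.0, 45, 54).
x, y, z = hemi_to_cartesian(1.0, 45, 54)
```

Angle 0 with elevation 0 lands directly in front at ear level, and elevation 90 lands at the zenith regardless of angle, matching the speaker layers described above.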

  2. Speaker signals

    You can provide the direct signals to be fed to each speaker, which allows you to perform your own spatialisation and create dynamic movements between speakers. In this case, you must provide 15 wav files, one for each of the 15 speakers, using the speaker numbering convention, as below.

    For example:

  • soundfile1.wav = speaker 1
  • soundfile2.wav = speaker 2, etc.

Channel 16, the subwoofer channel, can be left out unless you want to provide your own subwoofer signal.

  3. Ambisonics B-format

    You can provide a spatialised sonification encoded in the standard Ambisonics B-format. The Lake Huron system will be used to decode the B-format to the custom speaker array.

    • Soundfile1.wav = W channel
    • Soundfile2.wav = X channel
    • Soundfile3.wav = Y channel
    • Soundfile4.wav = Z channel

In all cases, the soundfiles must be individual 16 bit PCM mono .wav files at 44.1 kHz. The soundfiles should have the same name as the description document, with an additional unique ID in the range 01-36 for each, e.g. SmithBrownJones01.wav, SmithBrownJones02.wav, ..., SmithBrownJones16.wav. The Lake Huron system will be used to mix the soundfiles to a binaural form so that the selection panel can review the pieces through headphones.
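The required 16 bit PCM mono 44.1 kHz format can be written with Python's standard library. This is a sketch; the file name follows the naming convention of the call, but the function name and sample content are illustrative.

```python
# Sketch of writing a compliant soundfile with Python's standard
# library wave module.  The file name follows the call's naming
# convention; the samples here are only a placeholder.
import struct
import wave

def write_submission_wav(path, samples, rate=44100):
    """Write floats in [-1.0, 1.0] as 16 bit PCM mono WAV at 44.1 kHz."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono, as required
        w.setsampwidth(2)     # 16 bit PCM
        w.setframerate(rate)  # 44.1 kHz
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

write_submission_wav("SmithBrownJones01.wav", [0.0, 0.5, -0.5])
```

Clamping to [-1.0, 1.0] before scaling avoids integer overflow when converting to 16 bit samples.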

Further enquiries can be emailed to with the subject line

  • Listening to the Mind Listening.

For discussions please email the ICAD list at

Electronic submissions can be uploaded by ftp to

CD-ROM submissions can be sent by post to:

Stephen Barrass
Listening to the Mind Listening
CSIRO ICT Centre, GPO Box 664
Canberra ACT, Australia 2601

FTP Note

You do not need to link your soundfiles into the description document, just make sure to use the naming convention. It is simplest if you upload your files in a single zipped archive file.

You can upload your Listening to the Mind Listening submission by anonymous ftp to

Please use your email address as the password.

The ftp site is not readable, so you will not be able to see your own, or anyone else's, uploaded submissions.


Editors: Stephen Barrass & Paul Vickers
Published by the International Community for Auditory Display (ICAD).
CD-ROM ISBN: 1-74108-048-7     Website ISBN: 1-74108-062-2

Additional ICAD information and publications can be found at

Copyright © 2004 by the ICAD contributors.
All rights reserved. Copyright remains with the individual authors. No part of this publication can be reproduced, stored in a retrieval system, or transmitted in any form by any means, electronic, mechanical, photocopying, recording, or otherwise without prior written permission of the individual authors.

Created: 19-Aug-2003     Last modified: 30 June 2004