Complete Program

Tuesday, June 20

9:00 - 1:00 Workshops
Takahiko Tsuchiya
Live Coding Sonification System for Web Browsers with Data-to-Music API
Location: 210 IST Building
Matthew Neal and Nicholas Ortega
Tutorial on Higher Order Ambisonics and demonstration of the Auralization and Reproduction of Acoustic Sound-fields (AURAS) facility at Penn State
Location: 30 Hammond Building
Myounghoon Jeon, S. Maryam FakhrHosseini, Eric Vasey
New Opportunities for Auditory Interactions in Highly Automated Vehicles
Location: 201 IST Building
2:00 Opening Welcome
IST Building Cybertorium
3:00 - 4:35: Paper Session 1 - Language
IST Building Cybertorium
Thomas Gable, Brianna Tomlinson, Stanley Cantrell and Bruce Walker
Spindex and Spearcons in Mandarin: Auditory Menu Enhancements Successful in a Tonal Language
Auditory displays have been used extensively to enhance visual menus across diverse settings for various reasons. While standard auditory displays can be effective and help users across these settings, they often consist of text-to-speech cues that are time-intensive to use. Advanced auditory cues, including spindex and spearcon cues, have been developed to address this slow feedback. While these cues are most often used in English, they have also been applied to other languages; however, research on using them in tonal languages, whose tonality may affect their usability, is lacking. The current research investigated the use of spindex and spearcon cues in Mandarin to determine their effectiveness in a tonal language. The results suggest that the cues can be effectively applied and used in tonal languages by untrained novices, opening the door to future use of the cues in such languages.
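A minimal sketch of how the two cue types are typically generated (illustrative only, not the authors' implementation; demo_tts is a hypothetical stand-in for a real text-to-speech engine):

```python
import numpy as np

def demo_tts(text, sr=22050):
    # Hypothetical stand-in for a real TTS engine: a tone whose duration
    # grows with the text length, just so the sketch runs end to end.
    t = np.arange(int(0.15 * sr * max(len(text), 1))) / sr
    return 0.3 * np.sin(2 * np.pi * 440.0 * t)

def spindex_cue(item, tts=demo_tts, sr=22050):
    # Spindex: speak only the first character of the menu item.
    return tts(item[0], sr)

def spearcon_cue(item, tts=demo_tts, sr=22050, factor=0.4):
    # Spearcon: time-compress the fully spoken item (here to ~40% of its
    # duration). Naive index resampling also shifts pitch; production
    # spearcons use pitch-preserving time-scale modification.
    speech = tts(item, sr)
    idx = np.arange(0, len(speech), 1.0 / factor).astype(int)
    return speech[idx]
```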
Michael Nees, Joanna Harris and Peri Leong
How do people think they remember melodies and timbres? Phenomenological reports of memory for nonverbal sounds
Memory for nonverbal sounds such as those used in sonifications has been recognized as a priority for cognitive-perceptual research in the field of auditory display. Yet memory processes for nonverbal sounds are not well understood, and existing theory and research have not provided a consensus on mechanisms of memory for nonverbal sounds. We report an analysis of a qualitative question that asked participants to report the strategy they used to retain nonverbal sounds, both melodies and sounds discriminable only by timbre. Results suggested that auditory strategies were common across both types of sounds but were more commonly reported for remembering melodies. Motor strategies were also more frequently reported for remembering melodies. Both verbal labeling of sounds and associative strategies, linking the sounds to existing information in memory, were more commonly reported as strategies for remembering timbre. Implications for theory and future research are discussed.
Areti Andreopoulou and Visda Goudarzi
Reflections on the Representation of Women in the International Conferences on Auditory Displays (ICAD)
This paper investigates the representation of women researchers and artists in the conferences of the International Community for Auditory Display (ICAD). This topic was approached through the study of publication and authorship patterns of female researchers in ICAD conferences. Temporal analysis showed that the percentage of unique female authors published remained at a relatively unchanged level (mean = 17.9%) throughout the history of ICAD conferences. This level, though low, remains within the reported percentages of female representation in other communities with related disciplines, and is significantly higher than in audio communities with a more technical orientation.
Daniel Verona and Camille Peres
A Comparison Between the Efficacy of Task-Based vs. Data-Based sEMG Sonification Designs
This research focuses on sEMG sonification and two sEMG data analysis tasks: determining which of two muscles contracts first, and which of two muscles exhibits a higher exertion level. A type of hierarchical task analysis known as GOMS (Goals, Operators, Methods, Selection Rules) was performed for both tasks, and two sonification designs were created based on the results of these task analyses. Two data-based sEMG sonification designs were then taken from the sEMG sonification literature, and the four designs (two task-based and two data-based) are being empirically compared. We expect to find more accurate listener performance with the task-based designs than with the data-based designs.
Stephen Taylor
From Program Music to Sonification: Representation and the Evolution of Music and Language
The emerging field of bio-musicology and research into the origins of music and language can shed new light on musical representation, including program music and more recent incarnations such as data sonification. Although sonification and program music have different aims, one scientific explication, the other artistic expression, they share similar techniques: both rely on human and animal biology, cognition, and culture. Links between musicality and representation, such as dimensions like high-low, long-short, and near-far that bridge the real and the abstract, can prove useful for theorists, sound designers, and composers.
5:00 - 6:00 Opening Reception
IST Building Cafe
6:00 - 7:00 Keynote 1: Carla Scaletti
IST Building Cybertorium

Wednesday, June 21

9:00 - 10:20: Paper Session 2 - Movement
IST Building Cybertorium
Joseph Newbold, Nicolas Gold and Nadia Bianchi-Berthouze
Musical Expectancy in Squat Sonification for People who Struggle with Physical Activity
Physical activity is important in maintaining a healthy lifestyle. However, it can be hard for people to engage in physical exercise, and this struggle can often lead to avoidance of such activity despite its benefits. We investigate the role of musical expectancy as a way to leverage people's implicit and embodied understanding of music within a real-time sonification of movement, to provide information on technique while also motivating continuation of movement and rewarding its completion. The paper presents two studies showing how this musically-informed sonification can be used to support the squat movement.
Jon Bellona, Luke Dahl, Amy LaViers, Lin Bai
Empirically Informed Sound Synthesis Application for Enhancing the Perception of Expressive Robotic Movement
Since people communicate intentions through movement, robots can better interact with humans if they too can modify their movements to communicate changing state. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed “expressive.” However, robot hardware, which cannot recreate the same range of dynamics as human limbs, often limits expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents an application for synthesizing sounds that match various movement qualities.
Jason Sterkenburg, Steven Landry and Myounghoon Jeon
Influences of Visual and Auditory Displays on Aimed Movements Using Air Gesture Controls
With the proliferation of technologies operated via in-air hand movements, e.g., virtual/augmented reality, in-vehicle infotainment systems, and large public information displays, there remains an open question whether auditory displays can be used effectively to facilitate eyes-free aimed movements. We conducted a within-subjects study, similar to a Fitts paradigm study, in which 24 participants completed simple aimed movements to acquire targets of varying sizes and distances. The results highlight the potential for auditory displays to aid aimed movements using air gestures in conditions where visual displays are impractical, impossible, or unhelpful.
Juliana Cherston and Joseph A. Paradiso
Rotator: Flexible Distribution of Data Across Sensory Channels
'Rotator' is a web-based multisensory analysis interface that enables users to shift streams of multichannel scientific data between their auditory and visual sensory channels in order to better discern structure and anomaly in the data. This paper provides a technical overview of the Rotator tool as well as a discussion of the motivations for integrating flexible data display into future analysis and monitoring frameworks. An audio-visual presentation mode in which only a single stream is visualized at any given moment is identified as a particularly promising alternative to a purely visual information display mode.
11:00 - 12:00 Keynote 2: Elizabeth Cohen
IST Building Cybertorium
1:00 - 2:20: Paper Session 3 - Navigation & Noise
IST Building Cybertorium
Joseph Schlesinger, Brittany Sweyer, Alyna Pradhan and Elizabeth Reynolds
Frequency-Selective Silencing Device for Digital Filtering of Audible Medical Alarm Sounds to Enhance ICU Patient Recovery
Free-field auditory medical alarms, although widely present in intensive care units, have created a number of hazards for both patients and clinicians in this environment. This device, through the use of a Raspberry Pi and digital filters, removes the alarm sounds present in the environment while transmitting other sounds to the patient without distortion. This allows patients to hear everything occurring around them and to communicate effectively without experiencing the negative consequences of audible alarms.
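A minimal sketch of the filtering idea, assuming the alarm's spectral peaks are known in advance (the partial frequencies below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 16000                             # audio sample rate (Hz)
ALARM_PARTIALS = [960, 1920, 2880]     # hypothetical alarm partials (Hz)

def silence_alarm(block, fs=FS, q=30.0):
    # Cascade one narrow IIR notch per alarm partial, removing the alarm
    # while passing speech and ambience through essentially unchanged.
    for f0 in ALARM_PARTIALS:
        b, a = iirnotch(f0, q, fs=fs)
        block = lfilter(b, a, block)
    return block

# Example: one second of ambience contaminated by a 960 Hz alarm tone.
t = np.arange(FS) / FS
noisy = np.sin(2 * np.pi * 960 * t) + 0.1 * np.random.randn(FS)
clean = silence_alarm(noisy)
```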
Ruta Sardesai, Thomas Gable and Bruce Walker
Introducing Multimodal Sliding Index: Qualitative Feedback and Perceived Workload for Auditory Enhanced Menu Navigation Method
Using auditory menus on a mobile device has been studied in depth with standard flicking, as well as wheeling and tapping interactions. Here, we introduce and evaluate a new type of interaction with auditory menus, intended to speed up movement through a list.
Woodbury Shortridge, Thomas Gable, Brittany Noah and Bruce Walker
Auditory and Head-Up Displays for Eco-Driving Interfaces
Eco-driving describes a strategy for operating a vehicle in a fuel-efficient manner. Current research shows that visual eco-driving interfaces can reduce fuel consumption by shaping motorists’ driving behavior but may hinder safe driving performance. The present study aimed to generate insights and direction for design iterations of auditory eco-driving displays and a potential matching head-up visual display to minimize the negative effects of using purely visual head-down eco-driving displays.
Kees van den Doel and Michael Robinson
Use of sonification of RADAR data for noise control
Deep sounding radar surveys for geophysical exploration require the detection of faint reflections from deep subsurface structures. Signal-to-noise enhancement through extensive data stacking is effective provided the data noise is incoherent and time-invariant. We describe the use of sonification of radar data for quality control of peripheral equipment, specifically to detect unwanted noise with a temporal pattern. A small user study was performed to quantify variations in individual performance in detecting these patterns.
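One plausible form such a sonification can take is direct audification (an assumed approach, not necessarily the authors' exact pipeline): playing recorded traces back at audio rate so that noise with a temporal pattern becomes audible against the incoherent background.

```python
import numpy as np

def audify(traces, gain=0.9):
    # traces: 2D array (n_traces, n_samples). Remove each trace's DC
    # offset, concatenate into one stream, and normalize to [-1, 1]
    # so the result can be written out or played back as audio.
    sig = np.concatenate([tr - tr.mean() for tr in traces])
    peak = np.max(np.abs(sig)) or 1.0
    return gain * sig / peak
```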
2:30 - 3:00 Open session with ICAD Board
3:00 - 5:30 ICAD Board Meeting
Room 121 Borland Building
2:30 - 5:30 Sonification Do-a-thon
Room 113 Borland Building
5:30 - 7:00 Poster Session
Playhouse Theatre Lobby
Marlene Mathew
BSONIQ: A 3-D EEG Sound Installation
BSoniq is a multi-channel interactive bio-feedback installation which allows for real-time sonification and visualization of electroencephalogram (EEG) data. EEG data provides multivariate information about human brain activity. Here, a multivariate event-based sonification is proposed, using 3D spatial location to provide cues about particular EEG events.
Yuanjing Sun, Jaclyn Barnes and Myounghoon Jeon
Multisensory Cue Congruency in the Lane Change Test
Drivers interact with a number of systems while driving. Taking advantage of multiple modalities can reduce the cognitive effort of information processing and facilitate multitasking. The present study aims to investigate how and when auditory cues improve driver responses to a visual target. Results are discussed along with theoretical issues and future work.
Marcelo Ferranti and Rejane Spitz
Sounding objects: an overview towards sound methods and techniques to explore sound within design processes
In this paper we first illustrate the importance of the design thinking process by presenting two main approaches to design thinking: double-diamond and human-centered design. We then present a literature review on sound methods and techniques. Finally, we match those findings with classic design methods, such as personas, scenarios, and experience maps, to identify twenty key sound methods that could be applied in a design thinking context.
Peter Coppin, David Steinman, Daniel MacDonald and Richard Windeyer
Progress Toward Sonifying Napoleon’s March and Fluid Flow Simulations through Binaural Horizons
Accessible data analytics, meaning analytics that can be rendered for experience through vision, hearing, and touch, poses a fundamental challenge to designers. Because human hearing is optimized for detecting locations on a horizontal plane, our approach recruits this optimization by employing an immersive binaural horizontal plane. Two case studies demonstrate our approach: a sonic transcreation of a map and a sonic transcreation of a computational fluid dynamics simulation.
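A minimal sketch of placing a data point on a binaural horizontal plane using interaural time and level differences (a deliberate simplification; a full system would presumably use HRTF rendering):

```python
import numpy as np

def pan_azimuth(mono, azimuth_deg, sr=44100):
    # mono: 1D float array. Returns (n, 2) stereo with crude ITD/ILD
    # cues for an azimuth in [-90, 90] degrees (negative = left).
    az = np.radians(azimuth_deg)
    itd = 0.00066 * np.sin(abs(az))                 # up to ~660 us delay
    delay = int(itd * sr)
    near = np.copy(mono)
    far = np.pad(mono, (delay, 0))[:len(mono)]      # far ear hears it later...
    far = far * 10 ** (-6 * np.sin(abs(az)) / 20)   # ...and ~6 dB quieter
    lr = [far, near] if az > 0 else [near, far]     # columns: left, right
    return np.stack(lr, axis=1)
```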
Paulo Marins
Challenges and Constraints of Using Audio in Online Music Education
Several online music courses have been developed lately by educational companies. In addition, many universities have been offering online music degree programs. Since these courses and programs are taught through distance education, many ICTs are used, such as recorded video, online software, social networks, and audio. Audio is widely used in the online courses and degree programs that aim to teach applied music. This paper intends to clarify, through a literature review, some questions concerning this use, and also aims to provide a discussion regarding the challenges and constraints of using audio in online applied music lessons.
Wenyu Wu, Alexander Gotsis, Rudina Morina, Harsha Chivukula, Arley Schenker, Madeline Gardner, Felix Liu, Spencer Barton, Steven Woyach, Bruno Sinopoli, Pulkit Grover and Laurie Heller
Echoexplorer: A Game App for Understanding Echolocation and Learning to Navigate Using Echo Cues
Echolocation, the ability to detect objects in space through the perception of echoes from those objects, has been identified as a promising avenue to help visually impaired individuals navigate within their environments. We designed a game application that serves as a training platform for individuals, sighted or not, to train themselves to echolocate. While the game app is currently being tested on several individuals, this paper reports the process of app design, the design decisions, and how feedback from visually impaired individuals influenced these decisions.
7:00 - 8:00 Concert
Playhouse Theatre
Stephen Roddy
Sonification: The Good Ship Hibernia (audio)
Alfredo Ardia
Rami (video)
Julius Bucsis
Portraits of Nine Revolving Celestial Spheres (audio)
Roberto Zanata
After Images (video)
Antonio D'Amato
Körper (multichannel audio)
James Cave and Ben Eyes
Eonsounds: Fiamignano Gorge (voice and tape)
Andrew Litts
Singularity for trumpet and electronics

Thursday, June 22

9:00 - 10:20: Paper Session 4 - Ecology
IST Building Cybertorium
Josh Laughner and Elliot Canfield-Dafilou
Illustrating trends in nitrogen oxides across the United States using sonification
This project presents a sonification tool for exploring NO2 and O3 data from the Berkeley High Resolution (BEHR) Ozone Monitoring Instrument (OMI) and OMO3PR ozone profile datasets without oversimplifying the data. By allowing the listener control over the data-to-sound mapping and synthesis parameters, one can experience and learn about the interplay between NO2 and O3 concentrations. Furthermore, interannual and seasonal trends can be perceived across different types of locations.
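A minimal parameter-mapping sketch in the spirit of such a tool (an assumed mapping, not the authors'): each species becomes a pitch stream, and the listener-controllable parameters are the mapping range and tone duration.

```python
import numpy as np

def sonify(series, fmin=220.0, fmax=880.0, dur=0.25, sr=44100):
    # Normalize the concentration series to [0, 1], then map each value
    # onto an exponential pitch scale between fmin and fmax and render
    # it as a short sine tone.
    x = (series - series.min()) / (np.ptp(series) or 1.0)
    t = np.arange(int(dur * sr)) / sr
    return np.concatenate(
        [0.3 * np.sin(2 * np.pi * fmin * (fmax / fmin) ** xi * t) for xi in x])

# Hypothetical usage, with no2 and o3 as monthly-mean arrays for one site:
# left, right = sonify(no2), sonify(o3, fmin=110.0, fmax=440.0)
```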
Brianna Tomlinson, R. Michael Winters, Chris Latina, Smruthi Bhat, Milap Rane and Bruce Walker
Solar System Sonification: Exploring Earth and Its Neighbors through Sound
Informal learning environments (ILEs) like museums often incorporate multi-modal displays into their exhibits as a way to engage a wider group of visitors, often relying on tactile, audio, and visual means to accomplish this. We designed an auditory-only model of the Solar System and created a planetarium show, presented at a local science center. Attendees evaluated the performance on the helpfulness, interest, pleasantness, understandability, and relatability of the sound mappings. Overall, attendees rated the solar system and planetary details very highly, in addition to providing open-ended responses about their entire experience.
Kelly Fox, Jeremy Stewart and Rob Hamilton
MADBPM: Musical and Auditory Display for Biological Predictive Modeling
The modeling of biological data can be carried out using structured sound and musical process in conjunction with integrated visualizations. With a future goal of improving the speed and accuracy of techniques currently in use for the production of synthetic high value chemicals, the madBPM project couples real-time audio synthesis and visual rendering with a highly flexible data-ingestion engine. Each component of the madBPM system is modular, allowing for customization of audio, visual and data-based processing.
Arthur Pate, Benjamin Holtzman, John Paisley, Felix Waldhauser and Douglas Repetto
Pattern Analysis in Seismic Data Using Human and Machine Listening
Extraction of heat from the Earth’s crust is used to generate electricity with no carbon dioxide production. This “geothermal energy” requires permeable pathways of fracture networks for water to move through hot rock and pick up its heat. The injection and movement of water and steam through the crustal reservoir can cause earthquakes. Seismologists are facing the challenge of identifying, understanding and controlling these fracture processes in order to maximize heat extraction and minimize induced seismicity. Our assumption is that each fracture process is characterized by spectro-temporal features and patterns that are not picked up by current signal processing methods used in seismology, but can be identified by the human auditory system and/or by machine learning.
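On the machine-listening side, a minimal sketch of one common spectro-temporal representation (an assumed feature choice, not the authors' pipeline): each event's waveform is summarized by a log-magnitude spectrogram that can feed clustering or classification.

```python
import numpy as np
from scipy.signal import spectrogram

def event_features(trace, fs=500.0):
    # trace: 1D seismic waveform (at least a few hundred samples).
    # Returns the log-magnitude spectrogram flattened into one vector.
    f, t, sxx = spectrogram(trace, fs=fs, nperseg=128, noverlap=96)
    return np.log(sxx + 1e-12).ravel()
```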
11:00 - 12:00: Paper Session 5 - Games
IST Building Cybertorium
James Broderick, Jim Duggan and Sam Redfern
Using Auditory Display Techniques to Enhance Decision Making and Perceive Changing Environmental Data within a 3D Virtual Game Environment
Video games have strived towards powerful sound design, both for player immersion and information perception. Research exists showing how we can use audio sources and waypoints to navigate environments, and how we can perceive information from audio in our surroundings. This research explores using sonification of changing environmental data and environmental objects to improve users' navigation within simulated environments, both for training and for remote operation of unmanned vehicles.
Adrian Jäger and Aristotelis Hadjakos
Navigation in an audio-only first person adventure game
Navigation in audio-only first person adventure games is challenging since the users have to rely exclusively on their sense of hearing to localize game objects and navigate in the virtual world. In this paper we report observations that we made during the iterative design process for such a game and the results of the final evaluation.
Laurie Heller, Arley Schenker, Pulkit Grover, Madeline Gardner and Felix Liu
Evaluating two ways to train sensitivity to echoes to improve echolocation
We attempted to train sighted individuals to pay attention to information in echoes in order to improve their echolocation ability. We evaluated two training techniques that involved artificially generated sounds. Both artificial techniques were evaluated by their effect on natural echolocation of real objects with self-generated clicks. The lab training was labor intensive whereas the app training was self-guided and convenient. This has implications for training methods aimed at echolocation that might ultimately be useful for navigation.
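A minimal sketch of an artificially generated echo stimulus of the kind such training might use (an assumed form, not the authors' exact stimuli): a click followed by a delayed, attenuated copy, with the delay encoding simulated object distance.

```python
import numpy as np

def echo_stimulus(distance_m, sr=44100, c=343.0):
    # Half-second buffer: a short click plus one echo whose round-trip
    # delay is 2 * distance / c (distances up to ~40 m fit the buffer).
    click = np.zeros(int(0.5 * sr))
    click[:8] = 1.0
    delay = int(2 * distance_m / c * sr)
    echo = np.zeros_like(click)
    echo[delay:delay + 8] = 1.0 / max(distance_m, 0.5)  # farther = quieter
    return click + echo
```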
1:30 - 2:50: Paper Session 6 - Philosophy/Aesthetics
IST Building Cybertorium
Takahiko Tsuchiya and Jason Freeman
Spectral Parameter Encoding: Towards a Framework for Functional-Aesthetic Sonification
While functional designs tend to reduce musical expressivity for the fidelity of data, aesthetic or musical sound organization arguably has a potential for representing multi-dimensional or hierarchical data structure with enhanced perceptibility. Existing musical designs, however, generally employ nonlinear or interpretive mappings that hinder the assessment of functionality. The authors propose a framework for designing expressive and complex sonification using small-timescale musical hierarchies while maintaining data fidelity, ensured by a close-to-the-original recovery of the encoded data through descriptive analysis by a machine listener.
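A minimal sketch of the encode-then-recover idea (an assumed scheme for illustration, not the authors' framework): data values become the amplitudes of fixed sinusoidal partials, and a "machine listener" recovers them from FFT magnitudes at the known bins.

```python
import numpy as np

SR, N, F0 = 44100, 44100, 200   # 1-second frame; 200 Hz fundamental

def encode(values):
    # Data values in [0, 1] become amplitudes of harmonics of F0.
    t = np.arange(N) / SR
    return sum(v * np.sin(2 * np.pi * F0 * (k + 1) * t)
               for k, v in enumerate(values))

def decode(signal, n_values):
    # "Machine listener": read amplitudes back off the known FFT bins.
    mags = np.abs(np.fft.rfft(signal)) * 2 / N
    bins = [(k + 1) * F0 * N // SR for k in range(n_values)]
    return mags[bins]

data = np.array([0.8, 0.3, 0.5, 0.1])
recovered = decode(encode(data), len(data))   # ~= data
```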
Teresa Marie Connors
Organizing for Emergence: sonification as a co-creative device
In this paper, I offer a perspective into a creative research practice that contextualizes the use of computer vision data and sonification as a process of organized emergence. I propose a series of thinking-in-the-making moves that consider the data-body as a performative apparatus.
Parisa Alirezaee, Roger Girgis, Taeyong Kim, Joseph Schlesinger and Jeremy Cooperstock
Did you feel that? Developing Novel Multimodal Alarms for High Consequence Clinical Environments
Hospitals are overwhelmingly filled with sounds produced by alarms and patient monitoring devices. Consequently, these sounds create a fatiguing and stressful environment for both patients and clinicians. In an attempt to attenuate the auditory sensory overload, we propose the use of a multimodal alarm system in operating rooms and intensive care units. The results obtained from pilot testing support this approach. We conclude that further investigation of this method can prove useful in reducing the sound exposure level in hospitals as well as personalizing the perception and type of the alarm for clinicians.
Steven Landry and Myounghoon Jeon
Participatory Design Research Methodologies: A Case Study in Dancer Sonification
Given that embodied interaction is widespread in Human-Computer Interaction, interest in the importance of body movements and emotions is gradually increasing. The present paper describes our process of designing and testing a dancer sonification system using a participatory design research methodology. The end goal of the dancer sonification project is to have dancers generate aesthetically pleasing music in real-time based on their dance gestures. This paper focuses on the methods we used to identify, select, and test the most appropriate motion-to-sound mappings for a dancer sonification system.
3:00 - 4:30 - Workshop on Issues of Diversity, Equity and Inclusion
201 IST Building
6:00 - 8:00 - Banquet
IST Building Cafe

Friday June 23

9:30 - 10:45: Paper Session 7 - Computing
IST Building Cybertorium
David Worrall
Computational Designing for Auditory Displays
Approaching the design of auditory displays as a computational design problem poses both considerable challenges and opportunities. The intellectual foundations of computational designing rest at the confluence of multiple fields ranging from mathematics, computer science and systems science to biology, perception, social science, musical orchestration and philosophy. This paper outlines the fundamental concepts of computational design thinking based on seminal ideas from these fields and explores how they might be applied to the construction of models for synthesizing auditory scenes or environments.
Jiajun Yang and Thomas Hermann
Parallel Computing of Particle Trajectory Sonification to Enable Real-Time Interactivity
Model-Based Sonification (MBS) is a technique to sonify data based on the data's inherent structure. In contrast to Parameter Mapping Sonification, in MBS the data serves to define dynamical systems (akin to physical models) which users can excite interactively, in turn receiving the system response as the auditory representation. The computational cost of the sonification is large, making it unsuitable for real-time interaction. We revisit the Particle Trajectory Sonification (PTS) model, first introduced in 1999, and speed up its computation with data optimization and parallel computing.
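A compact, vectorized sketch of the PTS idea (simplified from the published model; numpy broadcasting stands in here for the paper's parallelization): particles move through a potential defined by the data points, and their summed velocities form the audio signal.

```python
import numpy as np

def pts(data, n_particles=16, n_steps=22050, dt=1e-4, sigma=0.3):
    # data: (m, d) array of data points defining the potential.
    d = data.shape[1]
    pos = np.random.randn(n_particles, d)
    vel = np.zeros((n_particles, d))
    out = np.zeros(n_steps)
    for i in range(n_steps):
        diff = data[None, :, :] - pos[:, None, :]          # (particles, points, d)
        w = np.exp(-np.sum(diff**2, -1) / (2 * sigma**2))  # Gaussian wells
        vel += dt * (w[..., None] * diff).sum(1) - 0.001 * vel  # damped
        pos += dt * vel
        out[i] = vel.sum()          # mix all particles into one sample
    return out / (np.max(np.abs(out)) or 1.0)

# Hypothetical usage: half a second of audio at 44.1 kHz from 2-D data.
# audio = pts(np.random.randn(50, 2))
```

Moving the per-step broadcast onto many cores or a GPU is the kind of parallelization the paper investigates.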
Brian Bartling and Catherine Psarakis
Simulation Products and the Multi-Sensory Interactive Periodic Table
The Multi-Sensory Interactive Periodic Table (MSIPT) is described as a simulation product for the perceptualization of electron configurations, atomic radii, orbital structures, and chemical bonds of the elements comprising the periodic table. A brief overview of the frameworks and possibilities inherent within this approach is addressed, followed by a discussion of MSIPT. It is concluded that a simulation product provides a robust and self-contained method for communicating multi-faceted structures through sound.
Samuel Chabot and Jonas Braasch
An Immersive Virtual Environment for Congruent Audio-Visual Spatialized Data Sonifications
Oftentimes, spatialized data sets are meant to be experienced by a single user or a few users at a time. Projects at Rensselaer's Collaborative-Research Augmented Immersive Virtual Environment Laboratory allow even large groups of collaborators to work within a shared virtual environment system. The loudspeaker array creates a high-spatial-density soundfield within which users are able to explore freely due to the virtual elimination of the so-called sweet spot.
11:00 - 12:00 - Open Mic/Closing
IST Building Cybertorium