Workshops

Workshop sessions will be held the morning of Tuesday, June 20, prior to the official opening of the conference.


Takahiko Tsuchiya
Live Coding Sonification System for Web Browsers with Data-to-Music API
Location: 210 IST Building

Today's web environment allows us to build highly accessible web applications for data sonification with powerful real-time audio synthesis. Moreover, in the form of "live coding", the development of a sonification becomes iterative, responsive, and exploratory. In this workshop, participants will experience web-based live coding of sonification with JavaScript and the Data-to-Music (DTM) API. First, we cover the basic concepts of web development, such as HTML, JavaScript, and file serving (10-20 minutes). We then learn the basics of data handling and audio synthesis in the DTM API (20-30 minutes), followed by more advanced techniques for transformation, analytics, and mapping for expressive sonification (20-30 minutes). Participants will then have an opportunity to experiment with these techniques using a data set of their choice (30 minutes to 1 hour). We conclude the session with presentations from the participants and a discussion of further applications of DTM, such as communication from the browser to Max/MSP via WebSockets or to DAW software via the Web MIDI API (10-20 minutes).
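
For a flavor of what participants will build, the sketch below maps a small data set to a sequence of pitched tones using only the plain Web Audio API. The DTM API wraps this kind of mapping in higher-level abstractions, so this is a generic illustration of the underlying idea rather than DTM code; the function name `sonify`, the example data, and the mapping choices are our own.

```javascript
// Minimal data-to-pitch sonification using the plain Web Audio API.
// Illustrative only: the DTM API covered in the workshop provides
// higher-level abstractions for this kind of data-to-sound mapping.
const data = [3, 7, 1, 9, 4, 6, 2, 8]; // example data set

function sonify(values, { duration = 0.25, fMin = 220, fMax = 880 } = {}) {
  const ctx = new AudioContext(); // most browsers require a prior user gesture
  const lo = Math.min(...values);
  const hi = Math.max(...values);

  values.forEach((v, i) => {
    // Linearly map each data value into the chosen frequency range.
    const norm = hi === lo ? 0.5 : (v - lo) / (hi - lo);
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = fMin + norm * (fMax - fMin);
    osc.connect(gain).connect(ctx.destination);

    // Schedule one short tone per data point, fading out to avoid clicks.
    const t = ctx.currentTime + i * duration;
    gain.gain.setValueAtTime(0.2, t);
    gain.gain.linearRampToValueAtTime(0, t + duration * 0.9);
    osc.start(t);
    osc.stop(t + duration);
  });
}

sonify(data);
```

Because the whole program is ordinary JavaScript running in the page, it can be edited and re-run on the fly, which is what makes the live-coding workflow iterative and exploratory.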



Matthew Neal and Nicholas Ortega, Adviser: Michelle C. Vigeant
Tutorial on Higher Order Ambisonics and demonstration of the Auralization and Reproduction of Acoustic Sound-fields (AURAS) facility at Penn State
Location: 214 Applied Science Building

Ambisonics, originally proposed by Gerzon in the 1970s, is a technique for reproducing a measured sound field with an array of loudspeakers. Since then, computer and audio technology have advanced considerably, and it is now possible for researchers and individuals to recreate measured sound fields using Ambisonics and Higher Order Ambisonics (HOA). The spherical harmonic components of a sound field can be measured with a spherical microphone array and reproduced using HOA. Commercially available spherical microphone arrays, along with the increasing availability of cost-effective multi-channel audio systems, make implementing an HOA system quite accessible. This workshop will provide key background information on the acoustic fundamentals of spherical harmonics, spherical microphone arrays, and Ambisonics.

With the foundations laid, attendees will learn about the Auralization and Reproduction of Acoustic Sound-fields (AURAS) facility at Penn State. The AURAS facility is a 30-loudspeaker HOA array located within an anechoic chamber on Penn State's campus. Details of the facility's construction, processing techniques, hardware setup, and software implementations will be presented. Current and future research projects utilizing this facility will also be outlined.

After the presentation of the AURAS facility, a demonstration of open-source tools for implementing HOA in MATLAB and Max 7 will be provided. Attendees will gain a baseline understanding of how to use these tools to implement an HOA system. The workshop will conclude with live demonstrations of the AURAS facility.

Both Matthew Neal and Nicholas Ortega are Ph.D. candidates in the Sound Perception and Room Acoustics Laboratory (SPRAL) in Penn State's Graduate Program in Acoustics. Working with Dr. Michelle C. Vigeant, they use spherical array processing techniques with both microphone and loudspeaker arrays in subjective and psychoacoustic testing.
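
As a taste of the spherical-harmonic math the workshop introduces, the sketch below encodes a mono sample into first-order Ambisonic B-format channels. This is a minimal, generic illustration using the traditional FuMa convention, not the processing used in the AURAS facility or the MATLAB/Max tools demonstrated in the workshop.

```javascript
// First-order Ambisonic (B-format) encoding of a mono sample.
// Generic sketch in the traditional FuMa convention; azimuth is measured
// counterclockwise from straight ahead, elevation upward from the horizon,
// both in radians. This is not the AURAS facility's implementation.
function encodeBFormat(sample, azimuth, elevation) {
  return {
    W: sample * Math.SQRT1_2,                            // order 0: omnidirectional
    X: sample * Math.cos(azimuth) * Math.cos(elevation), // order 1: front-back
    Y: sample * Math.sin(azimuth) * Math.cos(elevation), // order 1: left-right
    Z: sample * Math.sin(elevation),                     // order 1: up-down
  };
}

// Example: a unit impulse placed 45 degrees to the left, on the horizon.
console.log(encodeBFormat(1.0, Math.PI / 4, 0));
```

A decoder then projects these channels onto each loudspeaker's direction; higher orders add more spherical-harmonic channels ((N+1)^2 in total for order N) and correspondingly sharper spatial resolution, which is what a dense array like the 30-loudspeaker AURAS system can reproduce.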


Myounghoon Jeon, S. Maryam Fakhr Hosseini, and Eric Vasey
New Opportunities for Auditory Interactions in Highly Automated Vehicles
Location: 201 IST Building

Vehicle automation is becoming more widespread, and as automation increases, new opportunities and challenges have emerged. To name a few, maintaining driver situation awareness, delivering timely take-over requests, and providing additional cues to pedestrians are all new areas of research. In this workshop, we aim to address new opportunities and directions for auditory interactions in highly automated vehicles, both to provide a better driver user experience and to improve road safety.

We have five explicit goals for the workshop: (1) Provide an organized framework for thinking about how auditory interactions can be efficiently and effectively applied in highly automated vehicle contexts; (2) Build and nurture a new community that bridges the auditory display community with the automotive user interface community; (3) Discuss and exchange ideas within and across sub-communities; (4) Suggest promising directions for future transdisciplinary work; and (5) Yield both immediate and long-term community-, research-, and design-guidance products.

To this end, we invite researchers and practitioners from all backgrounds who are interested in the auditory display and automotive user interface fields. Achieving these goals will provide an opportunity to move this integrated field forward and to build a solid community that includes ICAD.