Auditory Display and Sonification of Textured Images

Antonio Cesar Germano Martins—Universidade de Sao Paulo
Rangaraj Mandayam Rangayyan—University of Calgary
Luis Antonio Portela—Universidade de Sao Paulo
Edson Amaro Junior—Universidade de Sao Paulo
Ruggero Andrea Ruschioni—Universidade de Sao Paulo


Introduction

Texture plays an important role in image analysis and understanding, with many applications in areas such as computer vision and medical imaging. In magnetic resonance imaging (MRI), for instance, texture analysis is used to identify diseased tissues (Lerski, Straughan, Schad, Boyce, Blüml & Zuna, 1993). In MRI exams of the brain, however, visual discrimination between healthy tissue and tissue that has undergone infarction or necrosis is difficult. This is due to the well-known effect that the surroundings interfere with the way we perceive a region of interest, which can lead to misinterpretation.

Most techniques for texture analysis are oriented towards statistical approaches (Wechsler, 1980), which might not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of textured images (Rangayyan, Martins & Ruschioni, 1996), in particular MRI images, as an adjunct to visual examination of the images. We consider two models of textured images: ordered or (quasi-) periodic texture, in which a basic texture element, or texton, is repeated over the image field; and random texture, which can be modeled as filtered spot noise. MRI images match the latter model. Our AD and sonification algorithms incorporate analogies between these two types of texture and voiced and unvoiced speech, respectively.

Implementation

The AD or sonification procedure consists of mapping projections (Radon transforms) of the image at a few angles to audible signals and playing them in sequence. In the case of random texture, modeled as filtered spot noise, the spectral envelopes of the projections are related to the characteristics of the filter spot and convey the essential information for texture discrimination. In the case of periodic texture, the sonification procedure provides timbre and pitch related to the shape, size, and periodicity of the textons.
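As a minimal illustration of this first step, the sketch below computes projections of a texture at a few angles using the Radon transform routine from scikit-image; the choice of library, the angle set, and the function name project_at_angles are assumptions for illustration and are not specified in the paper.

    import numpy as np
    from skimage.transform import radon

    def project_at_angles(image, angles_deg=(0, 45, 90, 135)):
        """Compute Radon projections of a 2-D texture at a few angles.

        Returns one 1-D projection signal per angle; these signals are
        the raw material that the AD/sonification steps map to sound.
        """
        image = np.asarray(image, dtype=float)
        # radon() returns a sinogram with one column per requested angle.
        sinogram = radon(image, theta=list(angles_deg), circle=False)
        return [sinogram[:, i] for i in range(sinogram.shape[1])]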

For AD of random textures, we use a linear prediction model to obtain the characteristic coefficients of each projection. Using the model coefficients, we generate extended signals with a larger number of samples; the method ensures that the spectral content of the original signal is maintained over the duration of the extended signal. The derivative of the resulting signal for each projection is then taken, scaled to the audible range, and played in sequence with a silent interval between the projections.
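A sketch of this extension step is given below, assuming an autocorrelation-method linear predictor and a white-noise excitation to synthesize the longer signal; the model order, output length, and normalization are illustrative values rather than the settings used in the paper.

    import numpy as np
    from scipy.linalg import solve_toeplitz
    from scipy.signal import lfilter

    def extend_projection(projection, order=12, out_len=8000):
        """Extend one projection with an LPC (all-pole) model (sketch).

        The all-pole model preserves the spectral envelope of the
        projection; driving it with white noise yields a longer,
        noise-like signal with a similar spectrum.
        """
        x = np.asarray(projection, dtype=float)
        x = x - x.mean()
        # Autocorrelation method: solve the Toeplitz normal equations.
        r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
        a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # predictor coefficients
        excitation = np.random.randn(out_len)         # white-noise drive
        y = lfilter([1.0], np.concatenate(([1.0], -a)), excitation)
        y = np.diff(y)                                # derivative of the extended signal
        return 0.9 * y / np.max(np.abs(y))            # scale to playback range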

For sonification of periodic texture, we first extract the basic element, the texton, using a cepstral filter (Martins & Rangayyan, 1996), and obtain information about the horizontal and vertical periodicities. We then use the projections of the individual textons at various angles to create a voiced speech-like signal. The non-zero portion of each projection is interpolated to a width dependent on the horizontal period and repeated as many times as necessary to fill a duration dependent on the width of the texton. Between the sound pattern of one projection and the next, we insert a silent interval dependent on the vertical period. The result is a serial, melody-like sonification of the pattern of each projection. The mapping of the vertical period to rhythm needs to allow the listener to distinguish between two different periods in the range of about 30 to 60 pixels (for images of size 512 × 512 pixels); however, the silent interval between the sound patterns of two successive projections should not be too long, as it could disturb or distract the listener. To meet these two requirements, we found that a non-linear mapping function is needed: using an exponential function, we mapped the chosen vertical period range of 30 to 60 pixels to the range 0.2 to 1.0 second, and used linear functions for the other two mappings.
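The paper states only that an exponential function maps vertical periods of 30 to 60 pixels to silent intervals of 0.2 to 1.0 second; the parameterization sketched below is one choice consistent with those endpoints, not necessarily the authors' exact function.

    import numpy as np

    def vertical_period_to_silence(period_px, p_min=30.0, p_max=60.0,
                                   t_min=0.2, t_max=1.0):
        """Map a vertical period (pixels) to a silent interval (seconds).

        Solves t = A * exp(B * p) so that t(p_min) = t_min and
        t(p_max) = t_max; the exponential keeps nearby periods
        distinguishable without making the pauses overly long.
        """
        B = np.log(t_max / t_min) / (p_max - p_min)
        A = t_min * np.exp(-B * p_min)
        return A * np.exp(B * np.clip(period_px, p_min, p_max))

With this choice, periods of 30, 45, and 60 pixels map to pauses of 0.2, about 0.45, and 1.0 second, respectively.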

Results

We have tested the AD and sonification algorithms with several synthetic images generated using circles, squares, ellipses, hashes, and triangles of several sizes convolved with a random noise field to obtain random textures, or with an impulse field of varying horizontal and vertical separations to obtain (quasi-) periodic textures.
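The sketch below illustrates how such test images could be generated from a small spot image; the spot shapes, image size, and periods shown are placeholders, since the exact generation parameters are not repeated here.

    import numpy as np
    from scipy.signal import fftconvolve

    def make_textures(spot, size=512, h_period=40, v_period=40, seed=0):
        """Generate one random and one (quasi-) periodic test texture.

        spot: a small 2-D array (circle, square, ellipse, hash, triangle).
        Random texture: spot convolved with a random noise field.
        Periodic texture: spot convolved with a regular impulse field.
        """
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal((size, size))
        random_texture = fftconvolve(noise, spot, mode='same')

        impulses = np.zeros((size, size))
        impulses[::v_period, ::h_period] = 1.0   # impulse field
        periodic_texture = fftconvolve(impulses, spot, mode='same')
        return random_texture, periodic_texture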

With the AD method, we were able to infer some information about the filter spots used to generate the random textures. By listening to a sequence of AD patterns of images generated with different spots, we could group the images belonging to the same spot, mainly by the frequency content of the sound. By listening to a sequence of AD patterns of images generated with the same spot at different sizes, such as circles of radius 6 and 10 pixels, we could order the images by spot size. We also observed that AD of the derivatives of the projections facilitates better perception of the differences between the characteristics of some images.

With the sonification procedure, all the mapped features of the periodic textures tested were readily identified after some training.

We have conducted preliminary tests on MRI images using selected areas corresponding to the gray and white tissues of the brain and to normal and infarcted tissues. By using the AD method, differences between the various tissue types were easily perceived, while visual discrimination of the same areas when placed within their corresponding complete MRI image contexts was difficult.

We are conducting further tests with synthetic images, natural images, and MRI brain images. We believe that our auditory display and sonification approaches will be useful as powerful adjuncts to the visual analysis of textured images.

References

Lerski, R.A., Straughan, K., Schad, L.R., Boyce, D., Blüml, S. & Zuna, I. (1993). MR Image Texture Analysis—An Approach to Tissue Characterization, In Magnetic Resonance Imaging, Vol. 11, pp. 873-887.

Wechsler, H. (1980). Texture Analysis—A Survey, In Signal Processing, Vol. 2, pp. 271-282.

Rangayyan, R.M., Martins, A.C.G. & Ruschioni, R.A. (1996, February). Aural Analysis of Image Texture Via Cepstral Filtering and Sonification, In Proc. SPIE Visual Data Exploration and Analysis III, Vol. 2656, pp. 283-294, San Jose, CA.

Martins, A.C.G. & Rangayyan, R.M. (1996, May). Cepstral Filtering and Analysis of Image Texture in the Radon Domain, In Proc. 1996 Canadian Conference on Electrical and Computer Engineering, pp. 466-469, Calgary.

Authors

Project homepage found at:
http://www.lsi.usp.br/~sonifica/sonifica.html

Antonio Cesar Germano Martins
Ruggero Andrea Ruschioni
Laboratório de Sistemas Integráveis (LSI) - Escola Politécnica da Universidade de São Paulo
Av. Prof. Luciano Gualberto, 158, Trav. 3, 05508-900
SÃO PAULO - SP - BRASIL.
Email: amartins@lsi.usp.br, roger@lsi.usp.br

Rangaraj Mandayam Rangayyan
Department of Electrical and Computer Engineering
The University of Calgary, Calgary,
ALBERTA, T2N 1N4, CANADA.
Email:
ranga@enel.ucalgary.ca

Luis Antonio Portela
Edson Amaro Junior
Departamento de Radiologia do Hospital das Clínicas da Universidade de São Paulo