As we navigate our way through life, the eyes and the ears play complementary roles in giving us information about our environment. Yet in research fields, the eyes predominate, as datasets are typically presented through visualization. There is much promise in complementary or alternative presentations for the ears, through sonification.
Studying large amounts of data with the ears offers a number of advantages. Small-scale variations may be "magnified" if they are mapped to a quality such as pitch, to which the human auditory system is particularly sensitive. The auditory system is also highly adapted for following multiple streams of information. That is, listeners can readily apprehend a number of simultaneous melodies if they are presented effectively. Thus, sonification is an effective way to display a multitude of signal processing operations simultaneously, with each being represented as a line of counterpoint, a series of chords, or a succession of musical instruments. Indeed, from an informatics perspective, sonification represents an exciting frontier of research methodologies. New ways of gathering information are constantly being created; it is not always clear how useful interpretations can be made from the multitude of information sources and the apparent "data deluge." Finding ways to listen to data offers considerable promise.
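The pitch-mapping idea described above can be illustrated with a minimal sketch: linearly rescaling a data series onto a frequency range so that small variations become audible pitch differences. The function name, the two-octave range (220–880 Hz), and the sample data below are all illustrative assumptions, not part of any of the projects described here.

```python
def map_to_pitch(values, f_min=220.0, f_max=880.0):
    """Linearly map data values onto a frequency range (Hz).

    Small variations in the data are "magnified" into audible pitch
    differences; the two-octave span here is an arbitrary choice.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# Hypothetical heart-rate-like samples (beats per minute)
samples = [62, 63, 61, 70, 75, 64]
freqs = map_to_pitch(samples)
```

The resulting frequency list could then be rendered by any synthesis environment; the mapping itself is the essential sonification step.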
This presentation summarizes sonification work being carried out at Penn State on a number of projects, which may include:
- ongoing work in sonification of heart rate variability (published earlier, and summarized at http://www.music.psu.edu/Faculty%20Pages/Ballora/sonification/sonex.html);
- correlations between dynamics of cardiac activity and movement and posture, as explored by Siang Lee Hong at Louisiana State University;
- sonifications of nanomotor movements, a joint project with Penn State's Department of Chemistry;
- sonifications of computer network activity, a joint project with Penn State's College of Information Science and Technology.