University of Minnesota
School of Physics & Astronomy

Spotlight

Now Hear This! Audio Templates

John Broadhurst
Alex Schumann

We compare every sound that we hear against the sounds we have already heard, which are stored in our memory.

Our neural signal processing research is looking for answers to these questions: How are sounds stored? How are sounds retrieved? How are sounds compared to the sound now present at the listener's ear? We cannot simply store the sound wave (as a CD does), because if we did, we would fill all our usable memory within a few weeks of birth. Instead, we extract a template of information from a sound, store it, and are later able to compare it, along with millions of other templates, with new incoming sounds. This explains why we can pick up the telephone, immediately recognize who is calling, and say, "Oh, hello, Jack" (or Jill): we have searched through the templates of the voices of people we know in the half second it takes to hear the voice and reply. Once we have identified a sound, or several sounds, we keep the "match" in local memory for a few seconds and test each new sound against it before trying a new full search of the possibilities.
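To make that two-level strategy concrete, here is a minimal Python sketch of our own devising, not the brain's actual mechanism: a "template" is reduced to average energies in a few frequency bands, similarity is a cosine score, and the band count, hold time, and match threshold are illustrative values, not anything measured from the brain.

```python
import numpy as np

def extract_template(sound, n_bands=32):
    """Reduce a sound wave to a compact spectral template.

    The brain surely does something far richer; here a template is just
    the average energy in a few coarse frequency bands, enough to
    illustrate 'store features, not the waveform'.
    """
    spectrum = np.abs(np.fft.rfft(sound))
    bands = np.array_split(spectrum, n_bands)
    template = np.array([band.mean() for band in bands])
    return template / (np.linalg.norm(template) + 1e-12)

def best_match(template, memory):
    """Full search: compare one template against every stored one."""
    names = list(memory)
    scores = [template @ memory[name] for name in names]  # cosine similarity
    i = int(np.argmax(scores))
    return names[i], scores[i]

class Listener:
    """Keep the last match in 'local memory' for a few seconds and test
    each new sound against it before trying a new full search."""

    def __init__(self, memory, hold_seconds=3.0, threshold=0.9):
        self.memory = memory        # name -> stored template
        self.hold = hold_seconds
        self.threshold = threshold
        self.last = None            # (name, time of last confirmation)

    def identify(self, sound, now):
        template = extract_template(sound)
        # Cheap re-check of the recent match before a full search.
        if self.last is not None and now - self.last[1] < self.hold:
            name = self.last[0]
            if template @ self.memory[name] >= self.threshold:
                self.last = (name, now)
                return name
        # Recent match absent, stale, or failed: search everything.
        name, _ = best_match(template, self.memory)
        self.last = (name, now)
        return name
```

Given a dictionary of stored templates, e.g. memory = {"Jack": extract_template(jack_clip), "Jill": extract_template(jill_clip)}, repeated calls to Listener.identify re-test the cached match cheaply and only fall back to the full search when it fails or goes stale.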

Our studies look at the processes the brain uses in constructing a template to be stored in memory. We introduce many repetitions of the same sound into a subject's ear over a period of about 15 minutes. Intermixed with these repetitions are occasional deviant sounds in which some attribute has been changed. The sound used is a computer-generated synthetic wave similar to that produced by a stringed musical instrument. Such a sound consists of an onset period, in which the body of the instrument first vibrates freely, followed by a sustained period, as the string forces the vibration of the instrument's soundboard or other resonator. During the onset period, we present a combination of five waves with non-harmonically related pitches. Occasionally one of the intermediate waves of this combination is omitted, the remainder of the sound being unaltered. The resulting brain processing implies that the template analysis is very dependent on extracting each wave from the overall sound.
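As a rough illustration (not the actual stimulus code from the study), the following Python/NumPy sketch generates a train of such sounds. The five onset frequencies, the durations, and the 10% deviant rate are assumptions of ours, chosen only so the example runs.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz

# Five non-harmonically related pitches for the onset period (Hz).
# Illustrative values only, not those used in the study.
ONSET_FREQS = [220.0, 341.0, 473.0, 617.0, 797.0]

def make_tone(onset_ms=50, sustain_ms=300, sustain_freq=220.0, omit=None):
    """Synthesize one stimulus: a free-vibration onset built from five
    non-harmonic waves, followed by a sustained, string-driven period.

    omit: index (1-3) of an intermediate onset wave to drop, producing
    a 'deviant' stimulus; None gives the standard stimulus.
    """
    t_on = np.arange(int(SAMPLE_RATE * onset_ms / 1000)) / SAMPLE_RATE
    onset = np.zeros_like(t_on)
    for i, f in enumerate(ONSET_FREQS):
        if i == omit:
            continue  # the omitted wave; everything else is unaltered
        onset += np.sin(2 * np.pi * f * t_on)

    t_sus = np.arange(int(SAMPLE_RATE * sustain_ms / 1000)) / SAMPLE_RATE
    sustain = np.sin(2 * np.pi * sustain_freq * t_sus)

    sound = np.concatenate([onset / len(ONSET_FREQS), sustain])
    return sound / np.max(np.abs(sound))

def stimulus_train(n=600, deviant_prob=0.1, seed=0):
    """Many repetitions of the standard, intermixed with occasional
    deviants in which one intermediate onset wave is omitted."""
    rng = np.random.default_rng(seed)
    for _ in range(n):
        if rng.random() < deviant_prob:
            yield make_tone(omit=int(rng.integers(1, 4)))  # deviant
        else:
            yield make_tone()                              # standard
```

Only the intermediate waves (indices 1-3) are ever omitted here, matching the description above that an intermediate component of the onset combination is removed while the rest of the sound is left unchanged.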

Using a superconducting magnetometer array, we have been able to locate the brain cells that form the brain's "mismatch" processor and to determine when they are active. The processor is shown as a white dot superimposed on an MRI scan of a subject's head. We can locate the active cells to within about 1/8 of an inch (about 3 mm) in all three directions, and the time of activation to within 2/100 of a second (20 milliseconds). We now know where and when the brain compares auditory templates. Our next research steps are to determine HOW the brain compares them. This research will help determine whether a hearing problem is physical or physiological, and will help in producing synthetic sound alerts that we can instantly recognize.

This work, being done by Professor J.H. Broadhurst and K. Ghanbeigi, will be presented at the International Conference on Biomagnetism in Sapporo, Japan, in August 2008.