Speech Processing Study Hits Funding Milestone
NIH Grant Marks 25th Consecutive Year Prof's Work Has Received Support
Oct. 27, 2009
The National Institutes of Health (NIH) recently awarded a $2.18 million grant to Dr. Susan Jerger, the Ashbel Smith Professor in the School of Behavioral and Brain Sciences, to support her studies of speech processing and childhood hearing impairment. The grant marks the 25th consecutive year that Jerger’s research has received funding.
“It is very rare for a funding agency to support a line of research for this long. It is testimony to both the groundbreaking discoveries by Dr. Jerger and her colleagues and their continuing insights and creativity,” said Dr. Bert Moore, dean of the school.
Research supported by the upcoming award and the previous five-year award is carried out in collaboration with Dr. Nancy Tye-Murray, research professor of otolaryngology at the Washington University School of Medicine in St. Louis.
“I am very excited to have the opportunity to continue with this work,” said Jerger.
Jerger directs the Children’s Speech Processing Lab, a research program that studies how children’s listening abilities develop and how they come to understand basic words. The research is conducted by placing children in a sound booth and presenting them with a variety of pictures flashed on a computer screen.
As the pictures appear, an audio-visual speech distractor, which begins with a speech sound that is either related or unrelated to the name of the picture, is played through a loudspeaker. Examples of related and unrelated picture-distractor pairs are the picture of a “cat” presented in the presence of the speech distractors “cap” and “bus,” respectively.
The children are asked to name the pictures as quickly as possible. Measuring naming times lets researchers assess how strongly the speech distractors influence a child’s responses.
If the related speech distractors speed up picture naming relative to the unrelated distractors, then the researchers have evidence that the child is sensitive to the built-in relationship, even though children frequently cannot verbalize their grasp of this type of knowledge.
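The logic of the comparison can be sketched in a few lines of code. This is only an illustration of the reasoning described above, not the lab's actual analysis: the latency values below are invented, and a real study would involve many trials, many children, and statistical testing.

```python
# Hypothetical sketch of the naming-time comparison described above.
# All numbers are invented for illustration; the lab's real data and
# analysis pipeline may differ.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Invented naming latencies (milliseconds) for one child.
related_ms = [820, 790, 805, 811]    # e.g., picture "cat" with distractor "cap"
unrelated_ms = [905, 930, 918, 940]  # e.g., picture "cat" with distractor "bus"

# If related distractors speed up naming, this difference is positive,
# which is taken as evidence the child registers the sound relationship.
priming_effect_ms = mean(unrelated_ms) - mean(related_ms)
print(f"Priming effect: {priming_effect_ms:.1f} ms")
```

A positive difference means the child named pictures faster when the distractor shared a speech sound with the picture's name, the pattern the researchers look for.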
“Even though speech seems to be an auditory signal in the ear, it’s really an audio-visual signal. Seeing and hearing a speaker helps effective communication,” said Jerger.
Jerger and her colleagues have found that children 4 years of age benefit when they are able to look at a speaker’s face and hear the speech at the same time. However, children between ages 5 and 9 do not benefit as much, perhaps because they are learning to read and write, as Jerger and her team hypothesized recently in the Journal of Experimental Child Psychology.
Other collaborators in Jerger’s studies exploring children’s speech processing are Drs. Herve Abdi, UT Dallas; Markus Damian, University of Bristol, UK; Nate Marti, University of Texas at Austin; and Melanie Spence, UT Dallas.
“I appreciate the incredible contributions of my colleagues and the students who work in the lab,” said Jerger. “Research is really all about teamwork.”
Parents interested in participating in the Children’s Speech Processing Lab may call (972) 883-4231 or email email@example.com.