How can we make speech screening more accessible to parents and healthcare professionals who work with children? This question spurred the development of the Automated Transcription Project, which uses technology to gather information about children’s speech production abilities in an easy-to-use format.
Research for this project focuses on: 1) building a computer system that can transcribe speech sounds presented in single words; 2) determining whether automated, computerized transcription of children's speech can reach the accuracy of manual transcription; and 3) developing this method to support or replace time-consuming hand transcription, the only option available to clinicians today.
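One way to frame the second goal is phoneme-level agreement between an automated transcription and a clinician's manual transcription. The sketch below is purely illustrative, not the project's actual method: it aligns two phoneme sequences with standard edit distance and reports the resulting agreement score. All function names and the sample transcriptions are assumptions for the example.

```python
# Illustrative sketch (not the project's implementation): scoring an
# automated phoneme transcription against a manual one via edit distance.

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[-1]

def phoneme_agreement(automated, manual):
    """Fraction of manual phonemes matched: 1 minus the phoneme error rate."""
    if not manual:
        return 1.0
    return 1.0 - edit_distance(automated, manual) / len(manual)

# Hypothetical example: "rabbit" as /r ae b ih t/, with a common
# child speech pattern of /r/ produced as /w/.
manual = ["r", "ae", "b", "ih", "t"]
automated = ["w", "ae", "b", "ih", "t"]
print(phoneme_agreement(automated, manual))  # 0.8
```

A real evaluation would average such scores over many words and speakers and compare them against inter-transcriber agreement among trained clinicians.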
For the future, this research will help in developing an automated application for a hand-held device, such as an iPhone, for use in speech screening. Parents and healthcare professionals could access this tool to facilitate early identification of possible speech sound production problems and to initiate referral for full evaluation by a speech-language pathologist.
Stage of Development
The Automated Transcription team has developed a prototype for collecting speech data in the field using a hand-held device. We are currently testing the reliability of the device for collecting speech production data in the field. This project is funded through the Callier Center Small Grant Project.
The Automated Transcription project is a collaboration among the School of Behavioral and Brain Sciences, the Callier Center for Communication Disorders and the Erik Jonsson School of Engineering and Computer Science. The team of researchers includes:
Thomas Campbell, PhD
Campbell is the Sara T. Martineau Professor in Communication Disorders in the School of Behavioral and Brain Sciences. He holds the Ludwig Michael Executive Directorship of the Callier Center for Communication Disorders. His research interests focus on early predictors of speech and language disorders in children as well as the identification of speech-motor and environmental variables that are associated with the recovery of communication skills after acquired neurological injury in childhood.
John Hansen, PhD
Hansen is associate dean for research for the Erik Jonsson School of Engineering and Computer Science and Distinguished Chair in Telecommunications. His research interests span digital speech processing; analysis and modeling of speech and speaker traits; speech pathology and voice assessment; and speech enhancement and feature estimation in noise. His current emphasis is on robust recognition and training methods for spoken document retrieval; recognition under accent, noise, stress, and the Lombard effect; and speech-feature enhancement in hands-free environments for human-computer interaction.
Abhijeet Sangwan, PhD
Sangwan received his PhD in electrical engineering from The University of Texas at Dallas in 2009. His research interests include automatic speech recognition, language recognition, accent analysis and robust speech signal processing. He is particularly interested in applications of speech and language technology in automatic assessment of spoken language, and speech analysis of long duration audio recordings.
Jenny McGlothlin, MS
McGlothlin has a master's degree in speech-language pathology and has worked as a speech therapist and clinical supervisor for the last 12 years. Her clinical work focuses on the evaluation and treatment of children with feeding and speech disorders, particularly those with a motor component. Her research interests include differential diagnosis of motor speech disorders in children, as well as the development of innovative diagnostic and treatment methods.