Real-time Visual Feedback for Retraining Tongue Movements for Speech

  • Dr. William Katz (left) - The interactive device is positioned like a shower head above the patient, and sensors are placed on the person's tongue
  • Patients view 3D images of their own tongue movements on a computer screen while they're speaking
  • (L-R) Dr. Jun Wang and Eric Farrar

Project Overview

Have you ever thought about where you put your tongue to make a speech sound? Most people never have to consider this because it happens so naturally. But for speakers with neurologically based speech disorders, this movement can be daunting.

The purpose of the Visible Speech project is to develop an easy-to-use clinical tool that provides visual feedback showing the position of the tongue in the mouth during speech. One or more sensors placed on the patient’s tongue allow for real-time monitoring of its movement. The aim of the project is to facilitate successful speech production for individuals who otherwise are unable to produce speech sounds accurately.
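The feedback loop described above can be sketched in a few lines: sensor positions stream in, are projected to screen coordinates, and are drawn for the patient in real time. The sketch below is purely illustrative and makes no claim about the project's actual hardware or software; the simulated trajectory, sample rate, and coordinate mapping are all assumptions.

```python
import math

def simulated_sensor_sample(t):
    """Synthesize a hypothetical tongue-sensor position (x, y, z) in mm.
    In a real system, samples would arrive from the sensor hardware."""
    return (10.0 * math.cos(t), 0.0, 5.0 * math.sin(t))

def to_screen(pos, scale=4.0, origin=(320, 240)):
    """Project a sensor's x/z coordinates onto 2D screen pixels (a
    side-on, sagittal view), so tongue height and advancement are
    visible to the patient. Screen y grows downward, hence the minus."""
    x, _, z = pos
    return (round(origin[0] + scale * x), round(origin[1] - scale * z))

def feedback_frames(n_samples, dt=0.01):
    """Produce one screen coordinate per incoming sensor sample,
    standing in for the real-time display update loop."""
    return [to_screen(simulated_sensor_sample(i * dt))
            for i in range(n_samples)]

frames = feedback_frames(5)  # first frame maps (10, 0, 0) mm to pixels
```

In an actual clinical tool, the projection step would be replaced by rendering onto a 3D model of the vocal tract, but the structure — sample, transform, display — is the same.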

In addition to this direct clinical application, the tool will enable researchers to gather data that provides greater insight into speech processes, deepening our understanding of the tongue's role in human speech production. The Visible Speech tool opens the possibility of understanding the intricate tongue movement patterns that underlie both normal and disordered speech.

Stage of Development

The Visible Speech project is in the pilot stage of development. A grant has been submitted to the National Institute on Deafness and Other Communication Disorders in support of this project.

Development Team

The Visible Speech project is a collaboration among the UT Dallas School of Behavioral and Brain Sciences, the Callier Center for Communication Disorders, the Erik Jonsson School of Engineering and Computer Science and the School of Arts and Humanities, as well as corporate sponsors. The team of researchers includes:

Thomas Campbell, PhD

Campbell is the Sara T. Martineau Professor in Communication Disorders in the School of Behavioral and Brain Sciences. He holds the Ludwig Michael Executive Directorship of the Callier Center for Communication Disorders. His research interests focus on early predictors of speech and language disorders in children as well as the identification of speech-motor and environmental variables that are associated with the recovery of communication skills after acquired neurological injury in childhood.

Robert Rennaker, PhD

An associate professor in neural engineering, Rennaker is involved in the development of neural interface systems. His other research focus is systems-level neuroscience, with additional interests in auditory neuroscience, plasticity and attention. He is acting director of the Texas Biomedical Device Center at UT Dallas.

William Katz, PhD

Katz is a professor of communication sciences and disorders. He studies language and the brain, including speech and language breakdown in adult aphasia and apraxia. He has researched coarticulation, child speech production and cue-trading relations at the prosody/syntax interface. His recent work focuses on the role of visual feedback in speech production, including applications for adult neurogenic patients and second language acquisition.

Eric Farrar, MFA

Farrar is an assistant professor in the Arts and Technology program in the School of Arts and Humanities. His professional experience is in 3D animation for feature films with a specialization in character rigging, creating internal structures and control systems that allow virtual 3D models to be animated. His research interests range from the use of computer imagery for scientific visualization to the cognitive processes of integrating music and animation.

Balakrishnan Prabhakaran, PhD

Prabhakaran is a professor of computer science, specializing in multimedia systems. He is focusing on video and health-care data analytics; streaming of 3D video, animations, and deformable 3D models; content protection and authentication of multimedia objects; quality of service (QoS) guarantees for streaming multimedia data in wireless ad hoc and mesh networks; and collaborative virtual environments. In the past, he has worked on multimedia databases, authoring and presentation, resource management and scalable web-based multimedia presentation servers.