The University of Texas at Dallas School of Behavioral and Brain Sciences

Jun Wang

Assistant Professor

Research Interests

Motor speech disorders, silent and dysarthric speech recognition, computational neuroscience for speech production

Curriculum Vitae

Contact

Email: [email protected]
Phone: 972-883-6821
Office: BSB 13.302
Campus Mail Code: BSB11
Website: Speech Disorders & Technology Lab

Biography

Dr. Jun Wang’s research includes silent and dysarthric speech recognition, motor speech disorders due to amyotrophic lateral sclerosis (ALS), and neuroscience for speech production. His work focuses on tongue motion and brain activity patterns during speech production, using quantitative and computational approaches (e.g., machine learning). Dr. Wang has identified an optimal set of flesh points on the tongue and lips for speech motor control studies and developed the first cross-speaker normalization approaches for speaker-independent silent speech recognition. His recent studies have shown the possibility of automatically detecting the presence of ALS from speech information and of decoding speech production from non-invasive neural (MEG) signals. Dr. Wang has received an NIH R03 grant, a UT System seed grant for brain research, and a New Century Scholar Award from the American Speech-Language-Hearing Foundation. He has also been a co-investigator on several NIH R01 grants and an NIH SBIR grant. Dr. Wang earned his bachelor’s degree from China University of Geosciences (Wuhan), his master’s degree from Beijing Institute of Technology in China, and his PhD from the University of Nebraska-Lincoln in the United States.

Recent and Selected Representative Publications

Recent Articles in Peer-Refereed Journals and Conference Proceedings

Kim, M., Cao, B., Mau, T., & Wang, J. (2017). Speaker-independent silent speech recognition from flesh point articulatory movements using an LSTM neural network. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(12), 2323-2336.

Wang, J., Kim, M., Hernandez-Mulero, A. H., Heitzman, D., & Ferrari, P. (2017). Towards decoding speech production from single-trial magnetoencephalography (MEG) signals. Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, 3036-3040.

Wang, J., Kothalkar, P. V., Cao, B., & Heitzman, D. (2016). Towards automatic detection of amyotrophic lateral sclerosis from speech acoustic and articulatory samples. Proc. Interspeech, 1195-1199.

Wang, J., Samal, A., Rong, P., & Green, J. R. (2016). An optimal set of flesh points on tongue and lips for speech movement classification. Journal of Speech, Language, and Hearing Research, 59, 15-26.
