Research interests: Behavioral Signal Processing, Speech-to-Speech Translation, Multimodal Environments, Signal Processing, Speech and Language Processing, Automatic Speech Recognition, Array Signal Processing, Lower Order Statistics
Panayiotis G. Georgiou received the B.A. and M.Eng. degrees (with Honors) from Cambridge University (Pembroke College), Cambridge, U.K., in 1996, where he was a Cambridge-Commonwealth Scholar, and the M.Sc. and Ph.D. degrees from the University of Southern California (USC), Los Angeles, in 1998 and 2002, respectively.
Since 2003, he has been a member of the Signal Analysis and Interpretation Lab at USC, where he is currently an Assistant Professor. His interests span the fields of multimodal and behavioral signal processing and speech-to-speech translation. He has published over 100 papers in the fields of behavioral signal processing, statistical signal processing, alpha-stable distributions, speech and multimodal signal processing and interfaces, speech translation, language modeling, immersive sound processing, sound source localization, and speaker identification. He is a Senior Member of the IEEE. He has been a PI and co-PI on federally funded projects, notably including the DARPA Transtac project “SpeechLinks” and, currently, the NSF projects “An Integrated Approach to Creating Enriched Speech Translation Systems” and “Quantitative Observational Practice in Family Studies: The Case of Reactivity.” He is currently on the editorial board of the EURASIP Journal on Audio, Speech, and Music Processing, a guest editor of the Computer Speech and Language journal, the Technical Chair for Interspeech 2016, and a member of the Speech and Language Technical Committee.
Papers co-authored with his students have won best paper awards: for analyzing the multimodal behaviors of users in speech-to-speech translation at the International Workshop on Multimedia Signal Processing (MMSP) 2006, for automatic classification of married couples’ behavior using audio features at Interspeech 2010, and for analyzing technology-based medical interpretation for cross-language communication at Human-Computer Interaction International (HCII) 2013.
His current focus is on behavioral signal processing, diverse sensing and speech processing from microphone arrays, and speech-to-speech translation.