A team of students led by statistics professor Jie Ding from the University of Minnesota will develop algorithms to recognize human emotions (e.g., calm, happy, angry) from audio speech data and to incorporate new emotions into existing speech. By applying machine learning techniques to various speech datasets, students will identify features of human speech that represent emotions, develop software to perform emotion recognition, and synthesize emotional speech data. Students will also have the opportunity to create their own dataset and apply their methods to training and testing. This work will enable further research in speech emotion analysis and may lead to new designs of human-computer interfaces.
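As a rough illustration of the recognition step, the sketch below extracts two simple acoustic features (RMS energy and zero-crossing rate) and classifies an utterance with a nearest-centroid rule. Everything here is hypothetical: the features, the synthetic tones standing in for speech, and the classifier are minimal illustrative choices, not the project's actual methods or data.

```python
import math

def extract_features(signal):
    # RMS energy: overall loudness of the utterance
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    # Zero-crossing rate: a rough proxy for pitch/noisiness
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)
    return (rms, zcr)

def make_tone(freq, amp, n=1600, sr=16000):
    # Synthetic sine wave standing in for a real speech clip
    return [amp * math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# Hypothetical labeled training signals: "angry" speech tends to be
# louder and higher-pitched than "calm" speech
train = {
    "calm":  [make_tone(120, 0.2), make_tone(140, 0.25)],
    "angry": [make_tone(300, 0.8), make_tone(320, 0.9)],
}

# Nearest-centroid classifier: average the feature vectors per emotion
centroids = {}
for label, signals in train.items():
    feats = [extract_features(s) for s in signals]
    centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                             for i in range(2))

def classify(signal):
    # Assign the emotion whose centroid is closest in feature space
    f = extract_features(signal)
    return min(centroids,
               key=lambda lab: sum((f[i] - centroids[lab][i]) ** 2
                                   for i in range(2)))

print(classify(make_tone(310, 0.85)))  # a loud, high-pitched test clip
```

A real pipeline would replace the hand-picked features with richer representations (e.g., spectral features learned from datasets such as those the students will assemble) and the centroid rule with a trained model, but the shape of the task — featurize, then classify — is the same.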
Project Manager: Enmao Diao