Publications

Journal Publications

Yelin Kim, Tolga Soyata, and Reza Feyzi Behnagh. “Towards Emotionally-Aware AI Smart Classroom: Current Issues and Directions for Engineering and Education.” IEEE Access, 2018. doi: 10.1109/ACCESS.2018.2791861
[paper][bib]

Yelin Kim and Emily Mower Provost. “ISLA: Temporal Segmentation and Labeling for Audio-Visual Emotion Recognition.” IEEE Transactions on Affective Computing (IEEE TAC), 2017.
[paper][bib]

Yelin Kim and Emily Mower Provost. “Emotion Recognition During Speech Using Dynamics of Multiple Regions of the Face.” ACM Transactions on Multimedia Computing, Communications, and Applications (ACM TOMM), Special Issue on ACM Multimedia Best Papers, 12(1), Article 25, 2015, pp. 25:1–25:23.
[paper][bib]

Conference Publications

Haoqi Li, Yelin Kim, Cheng-Hao Kuo, and Shrikanth Narayanan. “Acted vs. Improvised: Domain Adaptation for Elicitation Approaches in Audio-Visual Emotion Recognition.” Interspeech. October, 2021. [paper]

Yelin Kim, Joshua Levy, and Yang Liu. “Speech Sentiment and Customer Satisfaction Estimation in Socialbot Conversations.” Interspeech. October, 2020.
[paper]

Joanna Hong, Hong Joo Lee, Yelin Kim, and Yong Man Ro. “Face Tells Detailed Expression: Generating Comprehensive Facial Expression Sentence through Facial Action Units.” 26th International Conference on MultiMedia Modeling (MMM). January, 2020.

Sadat Shahriar and Yelin Kim. “Audio-Visual Emotion Forecasting: Characterizing and Predicting Future Emotion Using Deep Learning.” IEEE International Conference on Automatic Face and Gesture Recognition (FG). May, 2019.

Ehab Albadawy and Yelin Kim. “Joint Discrete and Continuous Emotion Prediction Using Ensemble and End-To-End Approaches.” ACM International Conference on Multimodal Interaction (ACM ICMI). October, 2018.
[paper][bib][TensorFlow Code Available!]

Yelin Kim and Jeesun Kim. “Human-Like Emotion Recognition: Multi-Label Learning from Noisy Labeled Audio-Visual Expressive Speech.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). April, 2018.
[paper][bib]

Jesse Parent and Yelin Kim. “Towards Socially Intelligent HRI Systems: Quantifying Emotional, Social, and Relational Context in Real-World Human Interactions.” The AAAI Fall Symposium Series: Artificial Intelligence for Human-Robot Interaction (AI-HRI). November, 2017.
[paper][bib]

Yelin Kim and Emily Mower Provost. “Emotion Spotting: Discovering Regions of Evidence in Audio-Visual Emotion Expressions.” ACM International Conference on Multimodal Interaction (ACM ICMI). November, 2016.
[paper][bib]

John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, and Emily Mower Provost. “Wild Wild Emotion: A Multimodal Ensemble Approach.” ACM International Conference on Multimodal Interaction (ACM ICMI). November, 2016.
[paper][bib]

Yelin Kim and Emily Mower Provost. “Leveraging Inter-rater Agreement for Audio-Visual Emotion Recognition.” Proc. of International Conference on Affective Computing and Intelligent Interaction (ACII). September, 2015, pp. 553-559.
[paper][bib]

Yelin Kim. “Exploring Sources of Variation in Human Behavioral Data: Towards Automatic Audio-Visual Emotion Recognition.” Proc. of International Conference on Affective Computing and Intelligent Interaction (ACII) Doctoral Consortium. September, 2015, pp. 748-753.
[paper][bib]

Yelin Kim, Jixu Chen, Ming-Ching Chang, Xin Wang, Emily Mower Provost, and Siwei Lyu. “Modeling Transition Patterns Between Events for Temporal Human Action Segmentation and Classification.” IEEE International Conference on Automatic Face and Gesture Recognition (FG). May, 2015, pp. 1-8.
[paper][bib] [Patent: U.S. Pub. No.: 2016/0321257 A1, 2016] (Acceptance rate: 12%)

Yelin Kim and Emily Mower Provost. “Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition.” Proceedings of the ACM International Conference on Multimedia (ACM MM). November, 2014, pp. 27-36.
[paper][bib][slides (pdf)] [coverage] (Acceptance rate: 19%)
Best Student Paper Award
Press Release by UMichigan
Press Release by IEEE Computer Society

Yelin Kim, Honglak Lee, and Emily Mower Provost. “Deep Learning for Robust Feature Generation in Audio-Visual Emotion Recognition.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). May, 2013, pp. 3687-3691.
[paper][bib]

Yelin Kim and Emily Mower Provost. “Emotion Classification via Utterance-Level Dynamics: A Pattern-Based Approach to Characterizing Affective Expressions.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Canada. May, 2013, pp. 3677-3681.
[paper][bib]

Patents

“Systems and Methods For Analyzing Time Series Data Based on Event Transitions.”
U.S. Pub. No.: 2016/0321257 A1, 2016.
Inventors: J. Chen, P. Tu, M.C. Chang, Y. Kim, S. Lyu
Assignee: Morpho Detection, LLC (Newark, CA, US)
Filed: May 1, 2015
Publication Date: November 3, 2016
Academic Publication: Y. Kim et al., FG 2015

Theses

Yelin Kim. “Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.” Ph.D. Thesis, University of Michigan, 2016.
[paper]