Kevin El Haddad, Yara Rizk, Louise Heron, Nadine Hajj, Yong Zhao, Jaebok Kim, Trung Ngo Trong, Minha Lee, Marwan Doumit, Payton Lin, Yelin Kim, Hüseyin Çakmak
In this work, we established the foundations of a framework for building an end-to-end naturalistic expressive listening agent. The project was split into modules for the recognition of the user's paralinguistic and nonverbal expressions, the prediction of the agent's reactions, the synthesis of the agent's expressions, and the recording of nonverbal conversational expression data. First, a multimodal, multitask deep learning-based emotion classification system was built, along with a rule-based visual expression detection system. Then, several sequence prediction systems for nonverbal expressions were implemented and compared. An audiovisual concatenation-based synthesis system was also implemented. Finally, a naturalistic, dyadic emotional conversation database was collected. We report here the work done on each of these modules and our planned future improvements.
El Haddad, K, Rizk, Y, Heron, L, Hajj, N, Zhao, Y, Kim, J, Trong, TN, Lee, M, Doumit, M, Lin, P, Kim, Y & Çakmak, H 2018, 'End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions', Journal of Science and Technology of the Arts, vol. 10, no. 2, pp. 49-61. https://doi.org/10.7559/citarj.v10i2.424
El Haddad, K., Rizk, Y., Heron, L., Hajj, N., Zhao, Y., Kim, J., Trong, T. N., Lee, M., Doumit, M., Lin, P., Kim, Y., & Çakmak, H. (2018). End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions. Journal of Science and Technology of the Arts, 10(2), 49-61. https://doi.org/10.7559/citarj.v10i2.424
@article{fc0a76bfe859484f90409cb5f419c62b,
title = "End-to-End Listening Agent for Audiovisual Emotional and Naturalistic Interactions",
abstract = "In this work, we established the foundations of a framework with the goal to build an end-to-end naturalistic expressive listening agent. The project was split into modules for recognition of the user{\textquoteright}s paralinguistic and nonverbal expressions, prediction of the agent{\textquoteright}s reactions, synthesis of the agent{\textquoteright}s expressions and data recordings of nonverbal conversation expressions. First, a multimodal multitask deep learning-based emotion classification system was built along with a rule-based visual expression detection system. Then several sequence prediction systems for nonverbal expressions were implemented and compared. Also, an audiovisual concatenation-based synthesis system was implemented. Finally, a naturalistic, dyadic emotional conversation database was collected. We report here the work made for each of these modules and our planned future improvements.",
author = "Haddad, {Kevin EI} and Yara Rizk and Louise Heron and Nadine Hajj and Yong Zhao and Jaebok Kim and Trong, {Trung Ngo} and Minha Lee and Marwan Doumit and Payton Lin and Yelin Kim and H{\"u}seyin {\c C}akmak",
year = "2018",
month = nov,
day = "8",
doi = "10.7559/citarj.v10i2.424",
language = "English",
volume = "10",
pages = "49--61",
journal = "Journal of Science and Technology of the Arts",
issn = "2183-0088",
publisher = "Universidade Cat{\'o}lica Portuguesa, Centro de Investiga{\c c}{\~a}o em Ci{\^e}ncia e Tecnologia das Artes",
number = "2",
}