PROJECT TITLE:
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and nonverbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving and producing the emotional and nonverbal behaviors required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting that is particularly well suited to the study of emotional and nonverbal behavior, since it requires only very limited verbal understanding on the part of the machine. This scenario allows us to concentrate on nonverbal capabilities without having to address, at the same time, the challenges of spoken language understanding, task modeling, etc. We first report on three prototype versions of the SAL scenario, in which the behavior of the Sensitive Artificial Listener characters was determined by a human operator. These prototypes served to verify the effectiveness of the SAL scenario and allowed us to collect the data required for building system components that analyze and synthesize the respective behaviors. We then describe the fully autonomous integrated real-time system we created, which combines incremental analysis of user behavior, dialogue management, and synthesis of speaker and listener behavior of a SAL character displayed as a virtual agent. We discuss principles that should underlie the evaluation of SAL-like systems. Since the system is designed for modularity and reuse, and since it is publicly available, the SAL system has potential as a joint research tool for the affective computing research community.