Digital companion with human-like emotion and ethics
Advances in Robotics & Automation

ISSN: 2168-9695

Open Access

6th World Convention on Robots, Autonomous Vehicles and Deep Learning

September 10-11, 2018 Singapore

Soo-Young Lee

Korea Advanced Institute for Science and Technology, Republic of Korea

Keynote: Adv Robot Autom

Abstract:

For successful interaction between humans and digital companions, i.e., machine agents, the companions need to be bound by human-like ethics as well as able to engage in emotional dialogue, understanding human emotion and expressing emotion of their own. In this talk we present our approaches to developing human-like ethical and emotional conversational agents as part of the Korean Flagship AI Program. The emotion of human users is estimated from text, audio and facial expression during verbal conversation, and the emotion of the intelligent agents is expressed through speech and facial expression. Specifically, we will show how our ensemble networks won the Emotion Recognition in the Wild (EmotiW 2015) challenge with 61.6% accuracy in recognizing seven emotions from facial expression. A multimodal classifier then combines text, voice and facial video for better accuracy. A deep-learning-based Text-to-Speech (TTS) system will also be introduced to express emotions. The emotions of human users and agents interact with each other during the dialogue. Our conversational agents have chitchat and Question-and-Answer (Q&A) modes, and in chitchat mode the agents respond differently to different emotional states. The internal states will then be further extended to trustworthiness, implicit intention and personality. We will also discuss how the agents may learn human-like ethics through human-machine interaction.
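The multimodal classifier described above combines per-modality emotion estimates into a single decision. The abstract does not specify the fusion method, so the following is only an illustrative late-fusion sketch under assumed weights: each modality (text, audio, facial video) is assumed to produce a probability vector over the seven emotion classes, and the vectors are merged by weighted averaging. All names, weights and numbers here are hypothetical, not taken from the talk.

```python
# Hypothetical late-fusion sketch for multimodal emotion recognition.
# Assumes each modality model outputs a probability vector over the
# same seven emotion classes; weights are illustrative, not tuned.

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def fuse(text_p, audio_p, video_p, weights=(0.3, 0.3, 0.4)):
    """Weighted average of per-modality probability vectors;
    returns the most likely emotion label and the fused distribution."""
    fused = [
        weights[0] * t + weights[1] * a + weights[2] * v
        for t, a, v in zip(text_p, audio_p, video_p)
    ]
    total = sum(fused)                       # renormalize to sum to 1
    fused = [p / total for p in fused]
    best = max(range(len(fused)), key=lambda i: fused[i])
    return EMOTIONS[best], fused

# Example: three modality models that mostly agree on "happy".
label, probs = fuse(
    [0.05, 0.02, 0.03, 0.60, 0.20, 0.05, 0.05],   # text model output
    [0.10, 0.05, 0.05, 0.40, 0.30, 0.05, 0.05],   # audio model output
    [0.02, 0.01, 0.02, 0.75, 0.10, 0.05, 0.05],   # facial-video model output
)
# label → "happy"
```

In practice such a system would replace the hand-set weights with a learned fusion layer, but the weighted-average form keeps the idea of combining text, voice and facial video visible in a few lines.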

Biography:

Soo-Young Lee has worked on artificial cognitive systems with human-like intelligent behavior based on biological brain information processing. His research includes speech and image recognition, natural language processing, situation awareness, top-down attention, internal-state recognition and human-like dialogue systems. Among the many internal states, he is especially interested in emotion, sympathy, trust and personality. He led the Korean Brain Neuroinformatics Research Program from 1998 to 2008 with dual goals: understanding the brain's information-processing mechanisms and developing intelligent machines based on those algorithms. He is currently the Director of the KAIST Institute for Artificial Intelligence, leading the Emotional Conversational Agent Project and a Korean National Flagship AI Project. He was President of the Asia-Pacific Neural Network Society in 2017 and has received the Presidential Award from INNS and the Outstanding Achievement Award from APNNA.

E-mail: sy-lee@kaist.ac.kr
