Course

    Interactive Virtual Characters

    Wednesday, 20 November

    14:15 - 18:00

    Room S424

    In this tutorial, we will describe both virtual characters and realistic humanoid social robots using the same high-level models. In particular, we will describe:

    1) How to capture real-time gestures and facial emotions from real people, how to recognize individual real people, and how to recognize certain sounds. We will present the state of the art and some new avenues of research.
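
    As a rough illustration of the kind of real-time capture loop this topic builds on, the following Python sketch detects faces from a webcam with OpenCV. It is only a generic example under our own assumptions (OpenCV and its bundled Haar cascade), not the capture pipeline presented in the tutorial; gesture, identity, and emotion recognition would be layered on top of such a loop.

```python
# Minimal sketch: real-time face detection from a webcam with OpenCV.
# Generic illustration only; identity and emotion classifiers would be
# applied to each detected face crop in a full capture pipeline.
import cv2

def capture_faces(camera_index: int = 0) -> None:
    # Haar cascade shipped with the opencv-python package; a deep detector could be swapped in.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    cam = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:
                # Each face region would be passed on to recognition modules.
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cam.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    capture_faces()
```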

    2) How to model a variety of interactive reactions of virtual humans and social robots (facial expressions, gestures, multiparty dialogue, etc.) depending on the input parameters of the real scene.
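
    As a minimal sketch of what "reactions depending on real-scene input parameters" can look like, the snippet below maps a few perceived parameters to a high-level reaction. The parameter names and the rule set are illustrative assumptions, not the reaction model taught in the tutorial.

```python
# Minimal sketch of mapping perceived scene parameters to a character reaction.
# Parameter names and reactions are placeholders chosen for illustration.
from dataclasses import dataclass

@dataclass
class SceneInput:
    user_distance_m: float      # distance of the closest user
    user_is_speaking: bool      # output of a voice-activity detector
    detected_emotion: str       # e.g. "happy", "neutral", "angry"

def select_reaction(scene: SceneInput) -> dict:
    """Return a high-level reaction (facial expression, gesture, dialogue act)."""
    if scene.user_distance_m > 3.0:
        return {"expression": "neutral", "gesture": "wave", "dialogue": "greet"}
    if scene.user_is_speaking:
        return {"expression": "attentive", "gesture": "nod", "dialogue": "listen"}
    if scene.detected_emotion == "happy":
        return {"expression": "smile", "gesture": "open_arms", "dialogue": "small_talk"}
    return {"expression": "neutral", "gesture": "idle", "dialogue": "prompt_question"}

print(select_reaction(SceneInput(user_distance_m=1.2, user_is_speaking=False,
                                 detected_emotion="happy")))
```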

    3) How to define Virtual Characters that have an emotional behavior (personality, mood, and emotions), and how to allow them to remember us and maintain a believable relationship with us. The aim of this part is to give virtual humans and social robots an individual rather than automatic behavior. We will also explain different methods to identify user actions and how to allow Virtual Characters to respond to them.
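
    A common way to organize such emotional behavior is a layered affect model in which a fixed personality biases a slowly drifting mood, while appraised events produce short-lived emotions. The sketch below is a simplified illustration of that idea under our own constants and update rule; it is not the specific model presented in the tutorial.

```python
# Minimal sketch of a layered affect model (personality -> mood -> emotions).
# The update rule and constants are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class AffectState:
    personality_valence: float                    # fixed trait baseline in [-1, 1]
    mood: float = 0.0                             # slowly varying, drifts toward baseline
    emotions: dict = field(default_factory=dict)  # short-lived, per-event intensities

    def appraise(self, event: str, intensity: float) -> None:
        """Register an emotional event (e.g. the user smiled at the character)."""
        self.emotions[event] = self.emotions.get(event, 0.0) + intensity

    def step(self, dt: float, mood_rate: float = 0.05, decay: float = 0.5) -> None:
        """Advance affect by dt seconds: emotions decay quickly, mood drifts slowly."""
        total_emotion = sum(self.emotions.values())
        self.mood += mood_rate * dt * (self.personality_valence + total_emotion - self.mood)
        self.emotions = {e: i * (1.0 - decay * dt)
                         for e, i in self.emotions.items() if i > 0.01}
```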

    4) How to model long-term and short-term memory, gaze-based interaction between users and virtual humans, and visual attention. We will present the concepts of behavioral animation, group simulation, and intercommunication between virtual humans, social humanoid robots, and real people.
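
    As one possible reading of the short-term versus long-term memory split, the sketch below keeps a time-limited buffer of recent observations and promotes repeated observations about a user into a persistent store. The retention span, promotion rule, and data layout are assumptions made for illustration only, not the memory model presented in the tutorial.

```python
# Minimal sketch of a short-term / long-term memory split for a virtual character.
# Retention span and promotion rule are illustrative assumptions.
import time
from collections import deque

class CharacterMemory:
    def __init__(self, short_term_span_s: float = 60.0, promote_after: int = 3):
        self.short_term = deque()   # recent (timestamp, user_id, observation) triples
        self.long_term = {}         # user_id -> list of consolidated facts
        self.span = short_term_span_s
        self.promote_after = promote_after

    def observe(self, user_id: str, observation: str) -> None:
        now = time.time()
        self.short_term.append((now, user_id, observation))
        # Forget short-term items older than the retention span.
        while self.short_term and now - self.short_term[0][0] > self.span:
            self.short_term.popleft()
        # Consolidate repeated observations about the same user into long-term memory.
        repeats = sum(1 for _, u, o in self.short_term
                      if u == user_id and o == observation)
        if repeats >= self.promote_after:
            self.long_term.setdefault(user_id, []).append(observation)

    def recall(self, user_id: str) -> list:
        """What the character remembers about a returning user."""
        return self.long_term.get(user_id, [])
```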

    Case studies will be presented from the Being There Centre (see http://imi.ntu.edu.sg/BeingThereCentre/Projects/Pages/Project4.aspx), where autonomous virtual humans and social robots react to a few actions performed by real people.


    Level

    Intermediate


    Intended Audience

    Researchers, practitioners, and graduate students in Animation, Graphics, Virtual Reality, Robotics, and Computer Vision.


    Prerequisites

    A background in Computer Graphics, Computer Animation, Computer Vision, or User Interfaces.


    Presenter(s)

    Daniel Thalmann, Nanyang Technological University
    Nadia Magnenat Thalmann, Nanyang Technological University


    Prof. Daniel Thalmann is with the Institute for Media Innovation at Nanyang Technological University, Singapore, and with EPFL, Switzerland. He is a pioneer in research on Virtual Humans. He is co-editor-in-chief of the journal Computer Animation and Virtual Worlds and a member of the editorial boards of six other journals. He has published more than 500 papers on animation and virtual reality. He received his PhD in Computer Science in 1977 from the University of Geneva and an Honorary Doctorate from University Paul-Sabatier in Toulouse, France, in 2003. He received the Eurographics Distinguished Career Award in 2010.

    Prof. Nadia Magnenat Thalmann has pioneered research into virtual humans over the last 30 years. She obtained a PhD in Quantum Physics from the University of Geneva (1977). She has published landmark papers on virtual humans, particularly on body deformations and on cloth and hair simulation. She has received numerous awards, among them two honorary doctorates, from the University of Hanover, Germany, and the University of Ottawa, Canada. She recently received the Humboldt Research Award in Germany. She is presently Professor and Director of the Institute for Media Innovation at Nanyang Technological University, Singapore, and Director of MIRALab at the University of Geneva.