Programme
Download the full programme (pdf) here. Below is a list of work to be presented.
Invited Speakers
- Catherine Pelachaud (CNRS, LTCI, TELECOM ParisTech, France)
- Bernd Bickel (Disney Research, ETH Zürich, Switzerland)
Oral Presenters
- A Talking Head for Speech Tutoring
Priya Dey, Steve Maddock, Rod Nicolson (Uni. Sheffield)
- Photorealistic 2D Audiovisual Text-to-Speech Synthesis using Active Appearance Models
Wesley Mattheyses and Werner Verhelst (Vrije Universiteit Brussel)
- Synthesizing Head and Facial Movement Disorders on Android Robots
Laurel D. Riek and Peter Robinson (Uni. Cambridge)
- A FACS Validated 3D Human Facial Model
Darren Cosker (Uni. Bath), Eva Krumhuber (Uni. Geneva), Adrian Hilton (Uni. Surrey)
- FACSGen 2.0: Facial Expression Animation Based on FACS
Eva Krumhuber (Uni. Geneva), Lucas Tamarit (Uni. Geneva), Etienne Roesch (Uni. Reading), Klaus Scherer (Uni. Geneva)
- Behaviour Transfer Between Expressive Talking Heads
Andrew Aubrey, David Marshall, Paul Rosin (Cardiff Uni.)
- Facial Animation for Real-Time Conversing Groups
Rachel McDonnell (Trinity College Dublin)
- Decoding Emotions from Facial Animations
Shazia Afzal (Uni. Cambridge), Tevfik Metin Sezgin (Koç University), Peter Robinson (Uni. Cambridge)
- Towards Building a 4D Morphable Face Model
Martin Breidt, Heinrich H. Bülthoff, Cristóbal Curio (MPI)
- Performance-Driven Facial Animation in Industry
Steven Caulkin (Cubic Motion, Daresbury, UK)
Posters
- A Practice-Led Approach to Facial Animation Research
Robin J.S. Sloan, Brian Robinson, Ken Scott-Brown, Fhionna Moore, Malcolm Cook (Uni. Abertay Dundee)
- Function over Form: An Identity-Free Dynamic Facial Avatar
Harry J. Griffin, Peter W. McOwan, Alan Johnston (UCL; Queen Mary, Uni. London)
- Expressive Audiovisual SMS Reading
Alex Garcia Gonçalves, José Mario De Martino (Uni. Campinas)
- Face Tracking and Head Pose Estimation using Convolutional Neural Networks
Stylianos Asteriadis, Kostas Karpouzis, Stefanos Kollias (Nat'l Technical Uni. of Athens)
- Compact 2D Facial Animation Based on Context-Dependent Visemes
Paula Dornhofer Paro Costa, José Mario De Martino (Uni. Campinas)
- Analysis of Colour Space Transforms for Person Independent AAMs
Tadas Baltrusaitis, Peter Robinson (Uni. Cambridge)
- Comparison of Techniques for Audio Driven Facial Animation
Benjamin Havell, David Marshall, Yulia Hicks, Paul Rosin, Saeid Sanei, Andrew Aubrey (Cardiff Uni.)
- Comparing Feature-based Metrics for Facial Dynamics Analysis
Andrew J. Aubrey, Gary K.L. Tam, David Marshall, Paul L. Rosin (Cardiff Uni.), Hui Fang, Phil W. Grant, Min Chen (Swansea Uni.)
- Perception of Gaze Direction in 2D and 3D Facial Projections
Jonas Beskow and Samer Al Moubayed (KTH)
- Perception of Nonverbal Gestures of Prominence in Visual Speech Animation
Samer Al Moubayed and Jonas Beskow (KTH)
- Character Animation from Audio: Speech Articulation and Beyond
Sasha Fagel (zoobe message entertainment GmbH)
Industry Demos
- Dimensional Imaging: 4D capture technology
- Cereproc: Characterful speech synthesis with embodied conversational agents