Keynote speakers

Jacqueline Nadel, CNRS, USR 3246, Centre Emotion, University Pierre & Marie Curie, Paris, France

Jacqueline Nadel is an emeritus Research Director at the Emotion Centre, La Salpetriere Hospital, National Centre of Scientific Research (CNRS). She is the author of numerous papers and the editor of several books published by Cambridge University Press and Oxford University Press. She is especially involved in interdisciplinary programs interfacing social neuroscience, cognitive psychology, epigenetic robotics and clinical interventions for nonverbal individuals with severe autism. Her studies are based on innovative designs allowing an online approach to nonverbal parameters of communication, especially via reciprocal imitation. Additionally, she edits the French scientific journal ENFANCE, coordinates the interdisciplinary network Autisme-Science, and serves as an expert for EU scientific projects.

Thursday 29th August: The Interacting Body and the Role of Imitation

Abstract: Imitation can be seen as a test case for the study of the interacting body. When imitating or being imitated, the usual coupling of perception and action in our body is completed by a cross-coupling: each partner perceives the movement performed both as an efferent signal coming from the self and as an afferent signal coming from the other. Hence our individual body knowledge is fed by interindividual encounters of bodily movements. From birth on, imitation provides the body with rich opportunities for such social interaction. Imitative interactions come to a peak between 24 and 30 months, as if bodily similarity were the privileged channel for communication before language. Nonverbal children with autism can also benefit from this resource in its two functions: learning and communication. Our fMRI studies show that spontaneous imitation and being imitated engage brain areas devoted to social cognition. Our hyperscanning EEG recordings of pairs of adult brains during synchronous imitative interaction reveal interbrain synchronizations of the centroparietal regions in the alpha band. This talk proposes an exploration of the role of imitation in motor interaction at the behavioral and brain levels. Within a dynamical perspective, renewed body knowledge is seen as an emergent property of interacting bodies.

Charles Sutton, University of Edinburgh, Edinburgh-UK

Charles Sutton is a Lecturer in machine learning (a post equivalent to assistant professor in other countries) at the University of Edinburgh. He completed his PhD in 2007 at the University of Massachusetts Amherst, working on approximate inference and learning methods for conditional random fields; his dissertation was nominated for an ACM Doctoral Dissertation Award. He then spent two years as a postdoctoral researcher at the University of California, Berkeley. His research interests centre on machine learning methods applied to a wide variety of applications, including natural language processing, maintenance and debugging of distributed systems, sustainable energy, and software engineering.

Friday 30th August: Probabilistic Machine Learning in the Wild

Abstract: Many people know about the tools that machine learning provides for classification and clustering, but applications sometimes demand more sophisticated tools. For example, it is often necessary to predict labels that depend on each other, such as the labels of different regions or of different words in a sentence. Or it may be necessary, as in medical diagnosis or in debugging computer programs, to learn about a latent cause that underlies the variables we wish to predict. Probabilistic machine learning provides a powerful set of tools for these problems, because probabilities give labelling decisions a common language in which to interact. But probabilistic methods can be difficult to apply, in part because they require a kind of backwards thinking to map problems onto solutions. In this talk, I will try to bridge this gap by example, describing several diverse applications of probabilistic machine learning, including natural language processing, object tracking, and software engineering, partly in the hope of stimulating discussion of further new applications.

Beth Ann Hockey, Intel Corporation

Beth Ann Hockey currently manages a research and development group at Intel Corporation. Prior to coming to Intel, she headed the language technology consulting company BAHRC LLC. She has a Bachelor's degree in Linguistics from UC Davis, an MSE in Computer Science and a Ph.D. in Linguistics from the University of Pennsylvania, and over 20 years of experience in computational linguistics and speech and language technology. Her areas of expertise include spoken dialogue systems, language modeling, dialogue management, targeted help systems, grammar development, speech translation, discourse and phonetics. During her time as a NASA contractor, Dr. Hockey was a founding member of the NASA Ames RIALIST spoken dialogue group. She was the project lead, as well as a principal designer and developer, of the Clarissa Procedure Navigator. The Clarissa system, designed to assist astronauts in executing procedures on the International Space Station, became in 2005 the first spoken dialogue system used in space. Her other projects at NASA Ames included serving as a core developer for the Regulus Open Source project and as PI of the NASA Intelligent Systems "Robust Reusable Speech Recognition" project. She was also a developer on the MedSLT medical speech translation project. She has been a Senior Research Scientist at the UC Santa Cruz University Affiliated Research Center and an adjunct Associate Professor of Linguistics. At UC Santa Cruz, she was PI of a UC Santa Cruz/Ford Motors research project on dialogue systems in cars and the instructor of an innovative course on spoken dialogue systems. As a visiting scientist at Microsoft, she worked with the Search Labs group on integrating NLP with their statistical techniques. Beth Ann is the author of over 50 refereed publications and co-author of the book "Putting Linguistics Into Speech Recognition: The Regulus Grammar Compiler" (CSLI Press, 2006).

Friday 30th August: Researchers are from Mars; industry is from Venus: interplanetary travel for intelligent virtual agents

Alessandro Vinciarelli, University of Glasgow, Glasgow-UK

Alessandro Vinciarelli is a Lecturer at the University of Glasgow (UK) and a Senior Researcher at the Idiap Research Institute (Switzerland). His main research interest is Social Signal Processing, the domain aimed at modelling, analysis and synthesis of nonverbal behaviour in social interactions. In particular, Alessandro has investigated approaches for role recognition in multiparty conversations, automatic personality perception from speech, and conflict analysis and measurement in competitive discussions. Overall, Alessandro has published more than 80 works, including one authored book, three edited volumes, and 22 journal papers. He participated in the organization of the IEEE International Conference on Social Computing as Program Chair in 2011 and as General Chair in 2012, and he has initiated and chaired a large number of international workshops, including the Social Signal Processing Workshop, the International Workshop on Socially Intelligent Surveillance and Monitoring, the International Workshop on Human Behaviour Understanding, the Workshop on Political Speech and the Workshop on Foundations of Social Signals. Furthermore, Alessandro is or has been Principal Investigator of several national and international projects, including a European Network of Excellence (the SSPNet), an Indo-Swiss Joint Research Project, and an individual project in the framework of the Swiss National Centre of Competence in Research IM2. Last, but not least, Alessandro is a co-founder of Klewel, a knowledge management company recognized with several awards.

Saturday 31st August: Social Signal Processing: Understanding Social Interactions Through Nonverbal Behavior Analysis

Abstract: Social Signal Processing is the domain aimed at modelling, analysis and synthesis of nonverbal behaviour in social interactions. The core idea of the field is that nonverbal cues, the wide spectrum of nonverbal behaviours accompanying human-human and human-machine interactions (facial expressions, vocalisations, gestures, postures, etc.), are the physical, machine-detectable evidence of social and psychological phenomena not otherwise accessible to observation. Analysing conversations in terms of nonverbal behavioural cues, whether this means turn organization, prosody or voice quality, allows one to automatically detect and understand phenomena such as conflict, roles, personality and quality of rapport. In other words, analysing speech in terms of social signals allows one to build socially intelligent machines that sense the social landscape in the same way people do. This talk provides an overview of the main principles of Social Signal Processing and some examples of their application.