The Centre for Speech Technology Research, The University of Edinburgh

Posters

The AMI project
Mike Lincoln
Structural representation and matching of articulatory speech structures based on the evolving transformation system (ETS) formalism
Alexander Gutkin
A method for automatically defining a unit inventory for ASR
Fiona Kenney
Applying Vocal Tract Length Normalization to Meeting Recordings
Giulia Garau
A Hybrid ANN/DBN Approach to Articulatory Feature Recognition
Joe Frankel
SVitchboard 1: Small vocabulary tasks from Switchboard 1
Simon King
Modelling Speech Dynamics with a Trajectory Model
Zhang Le
Analysis and Synthesis of Head Motion for Lifelike Conversational Agents
Hiroshi Shimodaira
Filling Pauses Due to Slow Reaction Times of Lifelike Conversational Agents
Verena Achenbach
Human-Computer Dialogue Simulation Using HMMs
Heriberto Cuayáhuitl
Automatic Meeting Segmentation using Dynamic Bayesian Networks
Alfred Dielmann
Acoustic Pulse Reflectometry for Vocal Tract Measurement
Calum Gray
Estimating detailed spectral envelopes using articulatory clustering
Yoshinori Shiga
Festival and Windows: Together at last
Briony Williams
Informed Blending of Databases for Emotional Speech Synthesis
Gregor Hofer
Cougar
Korin Richmond
Gorbals Speech Synthesis
Mark Fraser
Predicting Consonant Duration With Bayesian Belief Networks
Olga Goubanova
CombiLex: a multi-dimensional inheritance lexicon for regional pronunciations
Susan Fitt
Source-filter separation for articulation-to-speech synthesis
Yoshinori Shiga

Demos

Welsh diphone voice in Festival running in Windows under full MSAPI
Briony Williams
Gorbals Speech Synthesis
Mark Fraser
Festival
Rob Clark
Instrumented Meeting Room and the AMI Project
Mike Lincoln
Output Generation in the COMIC multimodal dialogue system
Mary Ellen Foster, Michael White, Andrea Setzer, Roberta Catizone
Filling Pauses Due to Slow Reaction Times of Lifelike Conversational Agents
Verena Achenbach