Publications by Mark Sinclair
Peter Bell, Catherine Lai, Clare Llewellyn, Alexandra Birch, and Mark Sinclair.
A system for automatic broadcast news summarisation, geolocation and translation.
In Proc. Interspeech (demo session), Dresden, Germany, September 2015.
An increasing amount of news content is produced in audio-video form every day. To effectively analyse and monitor this multilingual data stream, we require methods to extract and present audio content in accessible ways. In this paper, we describe an end-to-end system for processing and browsing audio news data. This fully automated system brings together our recent research on audio scene analysis, speech recognition, summarisation, named entity detection, geolocation, and machine translation. The graphical interface allows users to visualise the distribution of news content by entity names and story location. Browsing of news events is facilitated through extractive summaries and the ability to view transcripts in multiple languages.
Peter Bell, Pawel Swietojanski, Joris Driesen, Mark Sinclair, Fergus McInnes,
and Steve Renals.
The UEDIN ASR systems for the IWSLT 2014 evaluation.
In Proc. IWSLT, South Lake Tahoe, USA, December 2014.
This paper describes the University of Edinburgh (UEDIN) ASR systems for the 2014 IWSLT Evaluation. Notable features of the English system include deep neural network acoustic models in both tandem and hybrid configurations, with the use of multi-level adaptive networks, LHUC adaptation and Maxout units. The German system includes lightly supervised training and a new method for dictionary generation. Our voice activity detection system now uses a semi-Markov model to incorporate a prior on utterance lengths. These changes give improvements of up to 30% relative in WER on the tst2013 English test set.
Mark Sinclair, Peter Bell, Alexandra Birch, and Fergus McInnes.
A semi-Markov model for speech segmentation with an utterance-break prior.
In Proc. Interspeech, Singapore, September 2014.
Speech segmentation is the problem of finding the end points of a speech utterance for passing to an automatic speech recognition (ASR) system. The quality of this segmentation can have a large impact on the accuracy of the ASR system; in this paper we demonstrate that it can have an even larger impact on downstream natural language processing tasks – in this case, machine translation. We develop a novel semi-Markov model that segments audio streams into speech utterances optimised for the desired distribution of sentence lengths in the target domain. We compare this with existing state-of-the-art methods and show that it achieves not only improved ASR performance but also significant benefits on a speech translation task.
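The core idea of the semi-Markov approach – scoring candidate break points with both acoustic evidence and a prior over utterance durations – can be sketched with a small dynamic program. This is an illustrative toy, not the paper's actual model: the function names, the frame-level log-probability input, and the duration prior are all assumptions for the example.

```python
import math

def segment(speech_logp, dur_logprior, max_dur):
    """Choose utterance break points by dynamic programming.

    speech_logp[t]  -- log-probability that frame t is speech (from some front end)
    dur_logprior(d) -- log-prior on an utterance lasting d frames
    max_dur         -- longest utterance considered, in frames
    Returns a list of (start, end) frame spans covering the stream.
    """
    T = len(speech_logp)
    best = [-math.inf] * (T + 1)   # best score of any segmentation of frames [0, t)
    back = [0] * (T + 1)           # start of the final segment in that segmentation
    best[0] = 0.0
    for t in range(1, T + 1):
        # Try every permitted duration d for a segment ending at frame t
        for d in range(1, min(max_dur, t) + 1):
            score = best[t - d] + sum(speech_logp[t - d:t]) + dur_logprior(d)
            if score > best[t]:
                best[t], back[t] = score, t - d
    # Trace back the chosen break points
    spans, t = [], T
    while t > 0:
        spans.append((back[t], t))
        t = back[t]
    return spans[::-1]
```

With a prior that strongly favours three-frame utterances, a six-frame stream is cut into two equal spans; shifting the prior toward the target domain's sentence-length distribution shifts the breaks accordingly, which is the mechanism the paper exploits for translation.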
Joris Driesen, Peter Bell, Mark Sinclair, and Steve Renals.
Description of the UEDIN system for German ASR.
In Proc IWSLT, Heidelberg, Germany, December 2013.
In this paper we describe the ASR system for German built at the University of Edinburgh (UEDIN) for the 2013 IWSLT evaluation campaign. For ASR, the major challenge to overcome was finding suitable acoustic training data. Due to the lack of expertly transcribed acoustic speech data for German, acoustic model training had to be performed on publicly available data crawled from the internet. For evaluation, the lack of a manual segmentation into utterances was handled in two different ways: by generating an automatic segmentation, and by treating entire input files as a single segment. The latter method proved superior on the current task, yielding a WER of 28.16% on the dev set and 36.21% on the test set.
Mark Sinclair and Simon King.
Where are the challenges in speaker diarization?
In Proc. ICASSP, Vancouver, British Columbia, Canada, May 2013.
We present a study of the contributions to Diarization Error Rate made by the various components of a speaker diarization system. Following on from an earlier study by Huijbregts and Wooters, we extend into more areas and draw somewhat different conclusions. From a series of experiments combining real, oracle and ideal system components, we are able to conclude that the primary cause of error in diarization is the training of speaker models on impure data, something that is in fact done in every current system. We conclude by suggesting ways to improve future systems, including a focus on training the speaker models from smaller quantities of pure data instead of all the data, as is currently done.
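The error metric decomposed in this study splits into missed speech, false-alarm speech, and speaker confusion, all normalised by total reference speech time. A minimal frame-level sketch of that decomposition is below; it is not the NIST scoring tool, and it assumes hypothesis speaker labels have already been mapped to reference labels (the optimal-mapping step is omitted for brevity).

```python
def frame_der(ref, hyp):
    """Frame-level diarization error, split into its three components.

    ref, hyp -- equal-length per-frame label sequences: a speaker id,
                or None for non-speech.
    Returns (miss, false_alarm, confusion, der), each as a fraction of
    reference speech frames.
    """
    speech = sum(r is not None for r in ref)              # reference speech frames
    miss = sum(r is not None and h is None                # speech scored as silence
               for r, h in zip(ref, hyp))
    fa = sum(r is None and h is not None                  # silence scored as speech
             for r, h in zip(ref, hyp))
    conf = sum(r is not None and h is not None and r != h # wrong speaker attributed
               for r, h in zip(ref, hyp))
    der = (miss + fa + conf) / speech
    return miss / speech, fa / speech, conf / speech, der
```

Separating the three terms in this way is what lets oracle experiments of the kind described above attribute error to individual system components, such as speaker models trained on impure data.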
Peter Bell, Fergus McInnes, Siva Reddy Gangireddy, Mark Sinclair, Alexandra
Birch, and Steve Renals.
The UEDIN English ASR system for the IWSLT 2013 evaluation.
In Proc. International Workshop on Spoken Language Translation, Heidelberg, Germany, December 2013.
This paper describes the University of Edinburgh (UEDIN) English ASR system for the IWSLT 2013 Evaluation. Notable features of the system include deep neural network acoustic models in both tandem and hybrid configurations, cross-domain adaptation with multi-level adaptive networks, and the use of a recurrent neural network language model. Improvements to our system since the 2012 evaluation - which include the use of a significantly improved n-gram language model - result in a 19% relative WER reduction on the test set.