The Centre for Speech Technology Research, The University of Edinburgh

Publications by Martin I. Tietze

[1] Andi K. Winterboer, Martin I. Tietze, Maria K. Wolters, and Johanna D. Moore. The user-model based summarize and refine approach improves information presentation in spoken dialog systems. Computer Speech and Language, 25(2):175-191, 2011. [ bib | .pdf ]
A common task for spoken dialog systems (SDS) is to help users select a suitable option (e.g., a flight, hotel, or restaurant) from the set of options available. As the number of options increases, the system must have strategies for generating summaries that enable the user to browse the option space efficiently and successfully. In the user-model based summarize and refine approach (UMSR, Demberg and Moore, 2006), options are clustered to maximize utility with respect to a user model, and linguistic devices such as discourse cues and adverbials are used to highlight the trade-offs among the presented items. In a Wizard-of-Oz experiment, we show that the UMSR approach leads to improvements in task success, efficiency, and user satisfaction compared to an approach that clusters the available options to maximize coverage of the domain (Polifroni et al., 2003). In both a laboratory experiment and a web-based experiment using the Amazon Mechanical Turk platform, we show that the discourse cues in UMSR summaries help users compare options and choose between them, even though they do not improve verbatim recall. This effect was observed for both written and spoken stimuli.
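
A minimal sketch of the kind of user-model based option ranking the abstract describes, not the UMSR implementation itself: the attribute names, user-model weights, and the simple utility tiering used for summarization below are hypothetical placeholders.

    # Illustrative sketch: rank options by a weighted user-model utility,
    # then bucket them into tiers that a summary could describe.
    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        attributes: dict  # normalized scores in [0, 1], higher is better

    # Hypothetical user model: relative importance of each attribute.
    USER_MODEL = {"price": 0.5, "duration": 0.3, "layovers": 0.2}

    def utility(option: Option, user_model: dict) -> float:
        """Weighted sum of attribute scores under the user model."""
        return sum(w * option.attributes.get(attr, 0.0)
                   for attr, w in user_model.items())

    def cluster_by_utility(options: list, user_model: dict, n_tiers: int = 3) -> list:
        """Sort options by utility and split them into tiers for summarization."""
        ranked = sorted(options, key=lambda o: utility(o, user_model), reverse=True)
        size = max(1, len(ranked) // n_tiers)
        return [ranked[i:i + size] for i in range(0, len(ranked), size)]

    if __name__ == "__main__":
        flights = [
            Option("KL1278", {"price": 0.9, "duration": 0.4, "layovers": 1.0}),
            Option("BA2941", {"price": 0.3, "duration": 0.9, "layovers": 1.0}),
            Option("LH0963", {"price": 0.6, "duration": 0.7, "layovers": 0.0}),
        ]
        for tier in cluster_by_utility(flights, USER_MODEL):
            print([(o.name, round(utility(o, USER_MODEL), 2)) for o in tier])

A full UMSR presentation would additionally generate discourse cues and adverbials to verbalize the trade-offs between tiers, which this sketch does not attempt.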

[2] Martin I. Tietze, Andi Winterboer, and Johanna D. Moore. The effect of linguistic devices in information presentation messages on recall and comprehension. In Proceedings of ENLG 2009, 2009. [ bib | .pdf ]
[3] Martin Tietze, Vera Demberg, and Johanna D. Moore. Syntactic complexity induces explicit grounding in the MapTask corpus. In Proc. Interspeech, September 2008. [ bib | .pdf ]
This paper provides evidence for theories of grounding and dialogue management in human conversation. For each utterance in a corpus of task-oriented dialogues, we calculated integration costs, which are based on syntactic sentence complexity. We compared integration costs and grounding behavior under two conditions, namely a face-to-face and a no-eye-contact condition. The results show that integration costs were significantly higher for explicitly grounded utterances in the no-eye-contact condition, but not in the face-to-face condition.
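
A minimal sketch of the comparison the abstract describes, assuming per-utterance integration costs have already been computed; the field names and values are illustrative, not the authors' data or code, and a significance test would replace the raw cell means.

    # Group per-utterance integration costs by condition and grounding type,
    # then report the mean cost in each cell.
    from collections import defaultdict
    from statistics import mean

    utterances = [
        # (condition, explicitly_grounded, integration_cost) -- toy values
        ("face-to-face", True, 2.1),
        ("face-to-face", False, 2.0),
        ("no-eye-contact", True, 3.4),
        ("no-eye-contact", False, 2.2),
        # ... one entry per corpus utterance
    ]

    costs = defaultdict(list)
    for condition, grounded, cost in utterances:
        costs[(condition, grounded)].append(cost)

    for (condition, grounded), values in sorted(costs.items()):
        label = "explicit grounding" if grounded else "other"
        print(f"{condition:15s} {label:18s} mean integration cost = {mean(values):.2f}")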