The goal of this project is to create a speech interface that supports a user in interacting with multiple real-time devices at the same time, where the interaction with each device is a separate dialogue thread. The first aim is to show, through a human-computer study, that the naive way to implement a speech interface for managing multiple dialogue threads is not effective. The second aim is to run a human-human study to show that people can naturally manage multiple dialogue threads, and to identify the conventions they use to do so. The third aim is to build a speech interface that implements those conventions.
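To make the first aim concrete, the following is a minimal sketch of what such a naive baseline might look like: one dialogue thread per device, with a single active thread that the user must change by explicit switching command. The sketch is purely illustrative and reflects assumptions rather than the proposed system; the names (DialogueManager, DialogueThread, switch_to, and so on) are hypothetical.

```python
# Hypothetical naive baseline: one dialogue thread per device,
# with explicit switching commands. All names are illustrative
# assumptions, not part of the proposed system.

from dataclasses import dataclass, field


@dataclass
class DialogueThread:
    """Conversation state for one real-time device."""
    device_id: str
    history: list[str] = field(default_factory=list)

    def handle(self, utterance: str) -> str:
        self.history.append(utterance)
        # A real system would interpret the utterance here.
        return f"[{self.device_id}] acknowledged: {utterance!r}"


class DialogueManager:
    """Routes each user utterance to the currently active device thread."""

    def __init__(self) -> None:
        self.threads: dict[str, DialogueThread] = {}
        self.active: str | None = None

    def add_device(self, device_id: str) -> None:
        self.threads[device_id] = DialogueThread(device_id)
        if self.active is None:
            self.active = device_id

    def switch_to(self, device_id: str) -> None:
        # The naive convention: the user must name the device explicitly
        # before every change of topic. The human-human study is meant to
        # uncover the conventions people actually use instead.
        self.active = device_id

    def handle(self, utterance: str) -> str:
        if self.active is None:
            return "no active device"
        return self.threads[self.active].handle(utterance)


if __name__ == "__main__":
    dm = DialogueManager()
    dm.add_device("robot-1")
    dm.add_device("robot-2")
    print(dm.handle("move forward"))  # routed to robot-1
    dm.switch_to("robot-2")
    print(dm.handle("stop"))          # routed to robot-2
```

The design choice worth noting is that all routing hinges on a single explicit pointer to the active thread; the human-computer study in the first aim is intended to test whether exactly this kind of interface breaks down in practice.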
The main impact of this work is the development of a model that accounts for how people manage multi-threaded dialogues. This model will be demonstrated in an implemented speech interface. The resulting technology will be useful for interacting with the pervasive electronic devices we can expect to see in the future.