The purpose of this award is to plan and hold a two-day workshop that brings together leading international researchers in the field of natural language generation (NLG), with the aim of establishing a clear, community-wide position on the role of shared tasks and comparative evaluation in NLG research. In recent years, shared-task evaluation campaigns (STECs) have become increasingly popular in natural language understanding. In a STEC, different approaches to a well-defined problem are compared on the basis of their performance on the same task. The NLG community has so far resisted this trend, but a significant number of researchers in the community believe that some form of shared task, together with a corresponding evaluation framework, would benefit the field by providing a focus for research. However, there is no clear consensus on what such a shared task should be, whether there should be several such tasks, or what the evaluation metrics should be.
The aim of the workshop is to provide a forum that allows the time and engagement required to subject the different views to rigorous debate. We expect the workshop to produce a number of clearly argued positions on the issue, including basic specifications for a variety of shared-task evaluation campaigns that the wider community can then consider. The outcomes of the workshop will be documented in a report, disseminated via the workshop website, that summarizes the discussions and includes the participants' contributions.