The purpose of this award is to plan and hold a two-day workshop that brings together leading international researchers in the field of natural language generation (NLG), with the aim of establishing a clear, community-wide position on the role of shared tasks and comparative evaluation in NLG research. In recent years, shared-task evaluation campaigns (STECs) have become increasingly popular in natural language understanding. In a STEC, different approaches to a well-defined problem are compared on the basis of their performance on the same task. The NLG community has so far resisted this trend, but a significant number of researchers in the community believe that some form of shared task, and a corresponding evaluation framework, would provide a valuable focus for research in the field. However, there is no clear consensus on what such a shared task should be, whether there should be several such tasks, or what the evaluation metrics should be.

The aim of the workshop is to provide a forum with the time and engagement needed to subject the differing views to rigorous debate. We expect the workshop to produce a number of clearly argued positions on the issue, including basic specifications for a variety of shared-task evaluation campaigns that can then be considered by the wider community. The outcomes of the workshop will be documented in a report, disseminated via the workshop website, that summarizes the workshop discussions and includes the participants' contributions.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0710730
Program Officer: Tatiana D. Korelsky
Project Start:
Project End:
Budget Start: 2007-01-15
Budget End: 2007-12-31
Support Year:
Fiscal Year: 2007
Total Cost: $23,000
Indirect Cost:
Name: Ohio State University
Department:
Type:
DUNS #:
City: Columbus
State: OH
Country: United States
Zip Code: 43210