This collaborative project develops and evaluates lifelike, natural computer interfaces as portals to intelligent programs in the context of a Decision Support System (DSS). It aims to provide a natural interface that supports realistic spoken dialog, non-verbal cues, and the capability to learn, keeping its knowledge current and correct. The research objectives focus on the development of an avatar-based interface with which the DSS user can interact. Communication with the avatar takes place in spoken natural language combined with gestures or by pointing at the screen. The system supports speaker-independent continuous speech input as spontaneous dialog within the specified DSS domain. A robust backend that can respond intelligently to the DSS user's questions is expected to generate the responses the avatar speaks in reply, with realistic inflection and visual expressions.
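
As a concrete illustration of the pipeline just described (speech recognition, question answering, spoken avatar reply), the following minimal Python sketch stubs out the three stages. All class names and the toy knowledge entry are invented for illustration only; the project's actual recognizer, the AlexDSS backend, and the avatar renderer are not public APIs and are not reproduced here.

    class ToyRecognizer:
        """Stand-in for a speaker-independent continuous speech recognizer."""
        def transcribe(self, utterance: str) -> str:
            # A real system would decode audio; here the "audio" is already text.
            return utterance.strip().lower().rstrip("?").strip()

    class ToyBackend:
        """Stand-in for the question-answering backend behind the avatar."""
        def __init__(self) -> None:
            self.knowledge = {
                "what is an i/ucrc": "An Industry/University Cooperative Research Center.",
            }
        def answer(self, question: str) -> str:
            # Unanswered questions are flagged so an expert can extend the knowledge.
            return self.knowledge.get(
                question, "I do not know yet; I will flag that question for an expert.")

    class ToyAvatar:
        """Stand-in for the talking avatar: pairs the reply with an expression."""
        def speak(self, text: str) -> None:
            expression = "apologetic" if "do not know" in text else "confident"
            print(f"[avatar, {expression}] {text}")

    def dialog_turn(audio: str, asr, backend, avatar) -> None:
        """One turn of the dialog loop: recognize, answer, speak."""
        avatar.speak(backend.answer(asr.transcribe(audio)))

    if __name__ == "__main__":
        asr, backend, avatar = ToyRecognizer(), ToyBackend(), ToyAvatar()
        dialog_turn("What is an I/UCRC?", asr, backend, avatar)
        dialog_turn("Who funds the centers?", asr, backend, avatar)

The separation into recognizer, backend, and avatar mirrors the division of labor described below: language understanding and knowledge maintenance on one side, visualization and interaction on the other.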

The work develops, prototypes, and evaluates the desired user interface capabilities by using the model of a program officer to create a realistic avatar that can answer users' questions and respond in a humanly natural manner. The project extends a currently sponsored project that gathers information related to a centers program, with a program officer serving as subject matter expert. The recently developed AlexDSS system, which answers users' questions about the I/UCRC program, provides the baseline intelligent system behind the avatar. The avatar interfaces target both general users and the experts responsible for updating and correcting the domain knowledge.

The work is a collaboration between the Intelligent Systems Laboratory (ISL) at UCF and the Electronic Visualization Laboratory (EVL) at UIC. The EVL team focuses on avatar development, encompassing Visualization and Interaction with Realistic Avatars and Evaluation of System Naturalness and Usability. The ISL team concentrates on Natural Language Recognition and on Automated Knowledge Update and Refinement.

Project Report

Project Description

This project studied how to make computers easier to use for people who are not computer savvy. The research used human-like computer characters as the primary interface between a user and the computer: users interact with the computer through natural speech and gesture, and the computer responds in kind.

Project Outcomes

Techniques were developed that make computer-generated characters easier to create and to tailor for a variety of applications, such as training and education. A study was conducted to determine how well users responded to the computer-generated characters. Results were favorable; in fact, users who were not computer savvy tended to address the character as if it were a real person.

Broad Impact

The technology can be applied to a number of important training scenarios, such as virtual-reality training for the military, workforce training, online education, and museum education. It can also be used to archive individuals who have passed away, so that generations to come can still communicate with them as if they were alive. The technology further applies to video games, where a single project typically employs hundreds of people and budgets often reach roughly $100M. Video games are today a larger industry than Hollywood and therefore provide even greater employment to Americans.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Application #: 0703916
Program Officer: Rita V. Rodriguez
Project Start:
Project End:
Budget Start: 2007-02-15
Budget End: 2012-01-31
Support Year:
Fiscal Year: 2007
Total Cost: $585,872
Indirect Cost:
Name: University of Illinois at Chicago
Department:
Type:
DUNS #:
City: Chicago
State: IL
Country: United States
Zip Code: 60612