This project explores a multimodal corpus for vision-based meeting analysis. The research team is working on: (1) extracting the data from tapes and organizing it into multimedia databases; (2) developing a database visualization and analysis tool to support model development; and (3) developing an agent-based algorithm to extract hand and head tracking information so that higher-level models can be built on top of the data.
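The abstract does not specify how the agent-based tracking algorithm works; as a rough illustration of the general idea, the sketch below assigns one software agent per tracked body part (head or hand) in a single camera view, with each agent wrapping an off-the-shelf OpenCV tracker and logging its own bounding boxes as annotation rows. All names, the CSRT tracker choice, and the CSV output format are illustrative assumptions, not the project's actual method.

# Minimal sketch of an agent-per-body-part tracking loop (assumptions noted above).
# Requires opencv-python >= 4.5.1 for cv2.TrackerCSRT_create.
import csv
import cv2

class PartAgent:
    """One agent per tracked part: owns its tracker and logs its own boxes."""
    def __init__(self, label, frame, init_box):
        self.label = label                  # e.g. "head", "left_hand" (hypothetical labels)
        self.tracker = cv2.TrackerCSRT_create()
        self.tracker.init(frame, init_box)  # init_box = (x, y, w, h) on the first frame

    def step(self, frame_idx, frame, writer):
        ok, box = self.tracker.update(frame)
        if ok:
            x, y, w, h = map(int, box)
            writer.writerow([frame_idx, self.label, x, y, w, h])
        return ok

def annotate(video_path, init_boxes, out_csv):
    """Run one agent per body part over a single-view recording."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    # init_boxes: dict mapping part label -> (x, y, w, h) in the first frame
    agents = [PartAgent(lbl, frame, box) for lbl, box in init_boxes.items()]
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "part", "x", "y", "w", "h"])
        idx = 0
        while ok:
            for agent in agents:
                agent.step(idx, frame, writer)
            ok, frame = cap.read()
            idx += 1
    cap.release()

One agent per part keeps each tracker's state independent, so a lost hand track does not disturb the head track, and each agent's log can be checked against the corpus's co-temporal motion-tracking ground truth.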

The project organizes its datasets into a usable corpus with several unique properties for developing and testing new algorithms: ground truth at the psycholinguistic/psycho-social level (the social roles, status, and purpose of each meeting) and at the video level (motion tracking data collected co-temporally with the video). The developed tools improve access to the multimedia database of multi-view group human behavior, and the agent-based approach provides a novel means of video annotation. The tools and algorithms from this project can be applied to many other domains; for example, the tools may be used to analyze classroom behavior and other learning scenarios. The project provides research opportunities for undergraduate and graduate students, including women and individuals from underrepresented populations, and reaches out to user communities through publications, presentations, a web presence, and broader collaborative interactions.

Budget Start: 2013-11-19
Budget End: 2015-08-31
Fiscal Year: 2014
Total Cost: $9,566
Name: Texas A&M University
City: College Station
State: TX
Country: United States
Zip Code: 77845