Virtual training has become an increasingly important method for teaching skills that are risky, expensive, or otherwise infeasible to practice in the real world (e.g., military operations). A long-standing question is how well skills learned in a simulated training environment transfer to real-world practice. The simulated world differs from the real world in a number of respects; in particular, there are significant differences in sensory, motor, and perceptual features. Advances in embodied cognitive science have consistently demonstrated that the human body and the environment it inhabits together form a complex system that produces mental activity. However, most studies reported thus far have examined only small, incremental differences in cognitive processes, as predicted by embodied versus classical cognitive science. Few studies have investigated whether these incremental differences translate into tangible consequences for learning new skills. The central question investigated in this project is whether people perceive and enact risky versus non-risky actions differently in the simulated world than in the real world. One major difficulty in such studies is creating simple, parallel task environments that are amenable to controlled experimentation yet can be scaled up to real-world applications. This project addresses that difficulty by selecting everyday actions that can be inherently risky and by using methods that examine the time course over which event perception unfolds. To study action perception, participants will perform perceptual segmentation tasks, have their eye movements recorded, and answer questions about their memory of the actions. To study action enactment, participants will indicate how they would complete the actions.

The long-term practical objective of this research is to provide an empirical basis for developers of simulated training environments in three areas: 1) determining the appropriate level of specification for perceptual and sensory information; 2) determining the appropriate level of instruction needed to either highlight or compensate for the significant differences that arise when actions are enacted in the simulated world; and 3) determining which actions can be learned through simulation and which ought to be learned through real-world practice. The project integrates theory and empirical research from embodied cognitive science, artificial intelligence, human-computer interaction, and robotics. Only recently has cognitive psychology begun to understand how humans spontaneously segment the constant flux of multimodal information into discrete events and exert cognitive control over the ongoing world. This project takes a step forward toward conceptualizing human and complex machine behavior in terms of multimodal segmentation of the incoming world. This understanding will lead to better-designed simulated environments and will benefit a society in which people increasingly interact with technology and are trained in technology-driven learning environments.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0742109
Program Officer: Ephraim P. Glinert
Budget Start: 2007-09-01
Budget End: 2008-08-31
Fiscal Year: 2007
Total Cost: $70,000
Name: Texas A&M University-Commerce
City: Commerce
State: TX
Country: United States
Zip Code: 75429