9113787 Quek This is the third year of a three-year continuing grant. This research studies computer vision methods for tracking hand gestures. Gestures are followed by tracking a glove adorned with light-emitting diodes (LEDs) in front of a video system; other glove-marking methods will also be examined. Gesture interpretation will be performed by a dynamic vision system using the correspondence via event detection (CED) algorithm, which segments and tracks the motion of the LEDs over multiple video frames. The primary focus of the research will be to extend existing algorithms to handle occlusion, exploiting constraints inherent in hand anatomy, function, and gesture categories. Voice input will be implemented using commercial disconnected-speech recognition technology. The voice commands will be used to segment gesture epochs and to disambiguate the gesture categories. Rules for combining gesture interpretation and voice for gesture-stream segmentation will be developed. A gesture library constituting a taxonomy of gestures will be used for temporal coordination and conflict resolution. Both monocular and stereo camera environments will be used. The algorithm will be parallelized to assure real-time capability and to support subsequent research in which bare hands might be tracked without markers.
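The abstract does not describe the CED algorithm itself. Purely as an illustration of the frame-to-frame correspondence problem it must solve, the following is a minimal, hypothetical sketch of greedy nearest-neighbor matching of LED centroids between consecutive frames, with unmatched points flagged as possible occlusions; the function name and distance threshold are assumptions, not part of the actual system.

```python
import math

def match_leds(prev, curr, max_dist=15.0):
    """Greedy nearest-neighbor correspondence between LED centroids
    detected in two consecutive video frames.

    Hypothetical sketch, NOT the CED algorithm: points in `prev`
    with no match within `max_dist` pixels are returned as
    occlusion candidates.
    """
    matches = {}   # index in prev -> index in curr
    used = set()   # indices in curr already claimed
    for i, (px, py) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    occluded = [i for i in range(len(prev)) if i not in matches]
    return matches, occluded

# Example: the second LED vanishes between frames (e.g. hidden by a finger)
prev = [(10.0, 10.0), (50.0, 50.0)]
curr = [(12.0, 11.0)]
matches, occluded = match_leds(prev, curr)
```

A real tracker would exploit the anatomical constraints mentioned above (fixed inter-LED distances on the glove, limited joint motion between frames) to resolve ambiguous matches; this sketch uses only proximity.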