This project is exploring the integration of video and multiscale visualization facilities with computer vision techniques to create a flexible, open framework that advances the analysis of time-based records of human activity. The goals are to (1) accelerate analysis by employing vision-based pattern recognition to pre-segment and tag data records, (2) increase analytic power by visualizing multimodal activity and macro-micro relationships and by coordinating analysis and annotation across multiple scales, and (3) facilitate shared use of the developing framework with collaborators.

Researchers from many disciplines are taking advantage of increasingly inexpensive digital video and storage facilities to assemble extensive collections of human activity captured in real-world settings. The ability to record and share such data has created a critical moment in the practice and scope of behavioral research. The main obstacles to fully capitalizing on this scientific opportunity are the huge time investment that analysis requires with current methods, and the difficulty of coordinating analyses focused at different scales so as to profit fully from the theoretical perspectives of multiple disciplines. Any research that uses video or other time-based records to document or better understand human activity is therefore a potential beneficiary of this work, and the long-range objective is to better understand the dynamics of human activity as a scientific foundation for design.