This project develops the theory and algorithms for the next generation of statistical motion models and their applications in Bayesian motion synthesis. To date, one of the most effective ways to model human movement has been to construct statistical motion models from prerecorded motion data. While the promise of learning from motion data is substantial, current statistical motion modeling techniques suffer from four major limitations. First, they lack the scalability to model large, heterogeneous datasets. Second, they do not capture the environmental contact information embedded in prerecorded motion data. Third, they focus on modeling spatial-temporal patterns within a small temporal window rather than the global motion structures of human actions, and thus risk destroying those global structures during motion generalization. Fourth, and most importantly, they do not consider the dynamics that generate the motion. This project investigates a new generation of statistical motion models that addresses these four challenges. It also develops new Bayesian motion synthesis algorithms that leverage the proposed generative models in graphics and vision applications. In addition, the research produces new animation modeling systems for novices, new performance interfaces for full-body motion control, and new technologies for video-based motion capture.

In this project, the PI makes special efforts to recruit students from under-represented groups, to integrate the research into existing and new courses, and to use a high school competition as a channel for attracting more young students to pursue careers in computer science.
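To make the core idea concrete, the sketch below is a minimal, hypothetical illustration (not the project's actual models or algorithms) of statistical motion modeling and Bayesian synthesis: a Gaussian/PCA pose prior is learned from toy "prerecorded" data, and a full pose is then synthesized by MAP estimation from a few sparse constraints. All data, dimensions, and parameter names here are invented for illustration.

```python
import numpy as np

# Toy "prerecorded" motion data: 500 frames, 30 degrees of freedom,
# generated from a low-dimensional latent structure plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 5))
basis_true = rng.normal(size=(5, 30))
poses = latent @ basis_true + 0.01 * rng.normal(size=(500, 30))

# Learn a simple statistical motion model: mean pose plus principal
# components (PCA via SVD), with a Gaussian prior on the coefficients.
mean = poses.mean(axis=0)
U, S, Vt = np.linalg.svd(poses - mean, full_matrices=False)
k = 5
B = Vt[:k]                        # (k, 30) basis of the pose subspace
var = (S[:k] ** 2) / len(poses)   # prior variance of each coefficient

# Bayesian (MAP) synthesis: observe a few constrained degrees of
# freedom of a new pose and infer the full pose under the prior.
obs_idx = np.array([0, 7, 15])               # constrained DOFs
x_true = rng.normal(size=5) @ basis_true + mean
y = x_true[obs_idx]                          # observed values
sigma2 = 1e-4                                # observation noise variance

# Minimize ||C z - (y - mean[obs])||^2 / sigma2 + sum_i z_i^2 / var_i,
# which has the closed-form ridge-regression solution below.
C = B[:, obs_idx].T                          # (3, k) observation matrix
A = C.T @ C / sigma2 + np.diag(1.0 / var)
b = C.T @ (y - mean[obs_idx]) / sigma2
z = np.linalg.solve(A, b)
x_map = mean + z @ B                         # full synthesized pose

print("constraint error:", np.abs(x_map[obs_idx] - y).max())
```

The prior keeps the synthesized pose inside the subspace of plausible poses, while the likelihood term pulls it toward the constraints; richer models in this space replace the Gaussian prior with structured, contact-aware, or dynamics-aware models.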