The goal of this research is to develop a system that automatically generates and animates conversations between multiple cooperative agents, with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. The research is grounded in theory that addresses the relations and coordination among these channels. Its significance lies in providing a three-dimensional computer animation testbed for theories of cooperative conversation; human-machine interaction and training systems need synthetic agents that are more interactive and cooperative. Conversations are created by a dialogue planner that produces both the text and the intonation of the utterances. The speaker/listener relationship, the content of the text, the intonation, and the actions undertaken together drive the generators for facial expressions, lip motions, eye gaze, head motion, and arm gestures. This project will focus on domains in which agents must propose and agree on abstract plans, and may have to motivate and carry out physical actions and refer to objects in their physical environment during conversation.
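To make the cross-channel synchronization concrete, the sketch below illustrates one plausible way such a pipeline could be organized: a planner-produced utterance carries word timings and intonational prominence marks, and per-channel rules emit gesture and gaze events aligned to those timings. This is a minimal illustration only; all names (Word, Utterance, the generator functions) and the alignment rules are hypothetical assumptions, not the system's actual design.

    """Minimal sketch of multi-channel synchronization (hypothetical API)."""
    from dataclasses import dataclass


    @dataclass
    class Word:
        text: str
        start: float               # onset in seconds, e.g. from speech synthesis
        end: float
        pitch_accent: bool = False  # intonational prominence marked by the planner


    @dataclass
    class Utterance:
        speaker: str
        listener: str
        words: list[Word]


    def gesture_events(utt: Utterance) -> list[tuple[float, str]]:
        """Hypothetical rule: align a gesture stroke with each
        pitch-accented word, synchronizing hand motion and intonation."""
        return [(w.start, f"{utt.speaker}: gesture stroke on '{w.text}'")
                for w in utt.words if w.pitch_accent]


    def gaze_events(utt: Utterance) -> list[tuple[float, str]]:
        """Hypothetical rule: the speaker looks toward the listener at the
        end of the utterance to signal turn-giving."""
        return [(utt.words[-1].end, f"{utt.speaker}: gaze at {utt.listener}")]


    if __name__ == "__main__":
        # Hypothetical utterance with invented word timings.
        utt = Utterance(
            speaker="AgentA",
            listener="AgentB",
            words=[Word("Move", 0.00, 0.30, pitch_accent=True),
                   Word("the", 0.30, 0.40),
                   Word("red", 0.40, 0.70, pitch_accent=True),
                   Word("block", 0.70, 1.10)],
        )
        # Merge all channels onto one timeline for the animation system.
        for t, event in sorted(gesture_events(utt) + gaze_events(utt)):
            print(f"{t:5.2f}s  {event}")

The key design point the sketch is meant to convey is that every nonverbal channel is driven from a single timed, intonation-annotated representation of the utterance, so speech, gesture, and gaze cannot drift out of synchrony.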