This work investigates the effects of discourse context on explanations in tutorial settings. When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that fail to draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material and to avoid repeating old material that would distract the student from what is new. Strategies for using the discourse history in generating utterances are therefore of great importance to research in natural language generation for tutorial applications. The goal of this work is to produce a computational model of how tutors exploit the discourse history in instructional settings, and to implement this model in an intelligent tutoring system that maintains a dialogue history and uses it in planning its explanations.
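The two uses of the discourse history described above (suppressing already-conveyed material and linking new material back to prior material) can be sketched in a minimal, purely illustrative form. The names here (`DialogueHistory`, `plan_explanation`) are hypothetical and not taken from the system described in this work; this is a toy sketch of the planning idea, not the actual model.

```python
class DialogueHistory:
    """Hypothetical record of facts already conveyed to the student."""

    def __init__(self):
        self._conveyed = []

    def record(self, fact):
        self._conveyed.append(fact)

    def is_old(self, fact):
        return fact in self._conveyed


def plan_explanation(history, facts):
    """Plan an explanation from (fact, relates_to) pairs.

    Skips facts the student has already seen, and when a new fact
    relates to previously conveyed material, phrases it as a reminder
    so the new information is tied to the old.
    """
    plan = []
    for fact, relates_to in facts:
        if history.is_old(fact):
            continue  # avoid repeating old material
        if relates_to and history.is_old(relates_to):
            # relate new information to recently conveyed material
            plan.append(f"Recall that {relates_to}; now, {fact}")
        else:
            plan.append(fact)
        history.record(fact)
    return plan
```

For example, if "voltage drives current" has already been conveyed, a plan for that fact plus a related new fact would omit the first and phrase the second as a reminder-plus-extension. A real tutoring system would, of course, use far richer representations of both the history and the relations between propositions.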