AI planners have traditionally made certainty and simplicity assumptions that have obscured two important distinctions: the time at which a plan is executed differs from the time at which the plan is built, and the planning agent's model of the world will differ from the real world. Relaxing these assumptions introduces the problem of maintaining a planning agent's world model, which will be both incomplete and dynamic. A framework for representing and maintaining such a model has been developed which takes into account reports from external sources (e.g. sensors), the agent's proposed actions or plans, and external forces that may aid or confound those plans. Extensions to this framework will allow the agent to reason about (1) execution-time reports from sensors, thus allowing it to detect planning failures and learn from bad predictions, and (2) complex causal structures, thus allowing it to represent more realistic physical systems (like the ones studied in the qualitative-physics literature).