This project is developing theoretical and practical foundations for communicative agents that overtly exhibit self-awareness. Such agents are aware of what they know and do not know, what they can and cannot do, and what intentions and assumptions underlie their inferences and actions. They also show "presence of mind" in dialogue, recalling past experiences and reasoning about the ongoing interaction. The theoretical ideas under development for this purpose center on theories of autocognitive knowledge and inference (knowing what one knows and does not know, and how one acquires knowledge), a new approach to integrating uncertain inference with logic (quasi-probability theory), and a form of "syntactically aware" metainference that facilitates reasoning about a system's own knowledge and special capabilities. The project uses the EPILOG inference engine as a basis for experimentation while broadening that engine's capabilities. The knowledge representation used by EPILOG, episodic logic, nearly matches the expressive capabilities of ordinary language and is thus well suited to the inferential and dialogue goals of this project. The results will include demonstrable theories of self-awareness in artificial agents, new methods of general autocognitive, uncertain, and syntactically aware inference, and an improved EPILOG engine. The results will be disseminated in major conferences and journals for AI, agent, and dialogue research, and the improved EPILOG engine will be made generally available. In the longer run, the work will help pave the way for a new generation of more human-like interactive AI agents in areas such as software support, advising, tutoring, and games.
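The autocognitive stance described above, in which an agent reports its own epistemic state rather than guessing, can be illustrated with a minimal sketch. All names here (`KnowledgeBase`, `tell`, `ask`) are hypothetical and illustrative only; they are not EPILOG's actual interface:

```python
# Minimal illustrative sketch of an autocognitive query: an agent that
# distinguishes "I know P is true", "I know P is false", and "I don't know".
# The class and method names are hypothetical, not EPILOG's actual API.

class KnowledgeBase:
    def __init__(self):
        self.known_true = set()
        self.known_false = set()

    def tell(self, proposition, truth):
        # Record a proposition the agent has come to believe.
        (self.known_true if truth else self.known_false).add(proposition)

    def ask(self, proposition):
        # Autocognitive answer: report the agent's own epistemic state
        # instead of guessing when the proposition lies outside its knowledge.
        if proposition in self.known_true:
            return "yes"
        if proposition in self.known_false:
            return "no"
        return "I don't know"

kb = KnowledgeBase()
kb.tell("EPILOG uses episodic logic", True)
print(kb.ask("EPILOG uses episodic logic"))   # yes
print(kb.ask("the meeting is on Tuesday"))    # I don't know
```

The point of the sketch is the three-way distinction: unlike a classical two-valued query, the agent's answer space explicitly includes its own ignorance, which is the starting point for the richer autocognitive inference the project develops.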