This research focuses on building testable computational models of deception, including the major sub-phenomena of trust, expectation, suspicion, surprise, deception plans, and manufactured patterns. Such models and an associated theory can explain both offensive deceptions (to gain some advantage) and defensive deceptions (to foil someone else's plans). Using these models, the research will develop deceptive software as a second line of defense for computer systems under attack once access controls have been breached. Deception can mislead attackers about the state of an information system through false error messages, deliberate delays in executing commands, lies about task completion, fake displays, disinformation about computer resources, and coordinated fake clues. Producing a convincing deception requires careful planning because people can often recognize suspicious patterns, so this research will develop plans to apply deception sparingly and thoughtfully, based on a theory of trust and its psychological consequences. This will include ideas such as counterplanning against attacker plans and a general theory of the effectiveness of excuses. Other issues to be addressed include the penalty of deceiving nonmalicious users and the ethical concerns raised by deliberate deception.
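As a minimal illustration of the defensive tactics named above (false error messages, deliberate delays, and lies about task completion), the following sketch intercepts a command and returns a deceptive response. All names, messages, and the selection rule here are hypothetical, chosen only to make the tactics concrete; they are not part of the proposed system.

```python
import time

# Hypothetical fake error messages keyed by command name.
# These strings are illustrative, not taken from any real system.
FAKE_ERRORS = {
    "rm": "rm: cannot remove file: Input/output error",
    "wget": "wget: unable to resolve host address",
}

def deceptive_response(command, delay_seconds=0.0):
    """Return a deceptive reply to an attacker's command.

    Tactics sketched: a deliberate delay before answering, a false
    error message for commands deemed suspicious, and a lie about
    successful completion for everything else.
    """
    time.sleep(delay_seconds)        # deliberate delay in executing commands
    name = command.split()[0]
    if name in FAKE_ERRORS:
        return FAKE_ERRORS[name]     # false error message
    return "ok"                      # lie about task completion

print(deceptive_response("rm -rf /data"))  # fake I/O error
print(deceptive_response("ls /home"))      # claims success without acting
```

A real implementation would apply such responses selectively, per the proposal's emphasis on sparing, planned use of deception, since a pattern of implausible failures would itself arouse suspicion.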