Efficient algorithms to learn complex concepts from examples and other kinds of information are sought. A major focus is on the "other kinds of information" that may serve to reduce the computational difficulty of learning complex concepts, including queries to a teacher, an advantageous order of presentation of examples, the presence of related concepts, graded (as opposed to Boolean) inputs and responses, and constraints imposed by real-world situations (e.g., robot navigation, language learning). Techniques from computational complexity, the theory of automata and formal languages, the analysis of algorithms, and cryptography are applied to narrow the gap between provably efficiently learnable and provably not efficiently learnable classes of concepts. The problem of omissions and errors in the "other kinds of information" is also addressed. The problem of approximating a distribution by sampling is studied as a method of extending Valiant's paradigm of "probably approximately correct" identification to problems in which negative examples are absent.
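The abstract does not specify an algorithm, but the classic textbook illustration of PAC learning from positive examples only is Valiant's learner for monotone conjunctions: start with the conjunction of all variables and delete any variable that is 0 in some positive example. The sketch below (the target concept, variable count, and sample size are illustrative assumptions, not from the source) shows why only positive examples are needed, and why the resulting hypothesis never produces a false positive.

```python
import random

def learn_conjunction(positives, n):
    # Valiant-style learner for monotone conjunctions from
    # positive examples only: keep variable i in the hypothesis
    # only if x[i] == 1 in every positive example seen.
    hyp = set(range(n))
    for x in positives:
        hyp &= {i for i in range(n) if x[i] == 1}
    return hyp

def predict(hyp, x):
    # The hypothesis accepts x iff every retained variable is 1.
    return all(x[i] == 1 for i in hyp)

# Illustrative target concept (an assumption for this sketch):
# x0 AND x2 over n = 5 Boolean variables.
n, target = 5, {0, 2}
rng = random.Random(0)

# Draw uniform random examples and keep only the positive ones,
# mimicking a setting in which negative examples are absent.
sample = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(200)]
positives = [x for x in sample if all(x[i] == 1 for i in target)]

hyp = learn_conjunction(positives, n)
```

Because every target variable is 1 in every positive example, the learned hypothesis always contains the target conjunction, so its errors are one-sided: it may reject a true positive early on, but it never accepts a true negative. Sufficiently many positive samples then drive the probability of rejecting a true positive below any desired accuracy bound, which is the "probably approximately correct" guarantee.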