Normalization, matching, and unification are three computational steps found in virtually all symbolic computation systems. Applications of these operations include term rewriting; functional, equational, and logic programming languages; abstract data type specification and verification; automated deduction; code generation; type inference; and computer algebra. This project addresses these problems and the design and analysis of efficient algorithms for them. Practical algorithms are designed for both sequential and parallel machines. Algorithms that learn efficiently from their history are designed for normalization with respect to several important and useful classes of rewrite systems. Several facets of the matching and unification problems are studied from a novel perspective. Algorithms that correlate information from previous match/unify operations are designed for tree pattern matching and subterm unification, together with their interpreted versions. The practical performance of these algorithms is investigated. This research is useful for enhancing the efficiency of symbolic systems and for providing an understanding of the inherent complexities of these operations.
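
To illustrate one of the operations named above, the following is a minimal sketch of first-order syntactic unification in Python. It is not drawn from the project's own algorithms; the tuple-based term representation and the helper names (walk, occurs, unify) are assumptions chosen only for this example. A variable is written ('var', name) and a function application is written ('app', symbol, [args...]).

def walk(term, subst):
    # Follow variable bindings until reaching a non-variable or an unbound variable.
    while term[0] == 'var' and term[1] in subst:
        term = subst[term[1]]
    return term

def occurs(name, term, subst):
    # Occurs check: reject bindings such as x := f(x), which have no finite solution.
    term = walk(term, subst)
    if term[0] == 'var':
        return term[1] == name
    return any(occurs(name, arg, subst) for arg in term[2])

def unify(s, t, subst=None):
    # Return a most general unifier (a dict of variable bindings) or None on failure.
    subst = {} if subst is None else dict(subst)
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if s[0] == 'var':
        return None if occurs(s[1], t, subst) else {**subst, s[1]: t}
    if t[0] == 'var':
        return None if occurs(t[1], s, subst) else {**subst, t[1]: s}
    if s[1] != t[1] or len(s[2]) != len(t[2]):
        return None  # clashing function symbols or arities
    for a, b in zip(s[2], t[2]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Example: unifying f(x, g(y)) with f(g(z), g(h(z))) yields {x: g(z), y: h(z)}.
x, y, z = ('var', 'x'), ('var', 'y'), ('var', 'z')
lhs = ('app', 'f', [x, ('app', 'g', [y])])
rhs = ('app', 'f', [('app', 'g', [z]), ('app', 'g', [('app', 'h', [z])])])
print(unify(lhs, rhs))

Matching is the one-sided special case in which only one of the two terms contains variables, and normalization repeatedly applies rewrite rules (found by matching) until no rule applies.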