Algorithms for learning by interaction, or reinforcement learning, typically ignore all structure in the environment and consequently tend to scale poorly. The goal of this research is to develop novel, efficient, and theoretically well-founded algorithms and architectures for learning by interaction in structured environments. Three kinds of environmental structure are considered: factorial structure in states and actions, additive structure in payoff functions, and hierarchical structure in states and actions. Such structure is common because many environments are composed of multiple, weakly interacting components that are often organized hierarchically. The approach is to exploit this structure by learning separately for the different components and then compensating, in a structure-dependent manner, for the approximation this introduces. The results of this research will elucidate a variety of interesting and useful structures common to learning-by-interaction problems and will provide new reinforcement learning algorithms that make it possible to solve significantly larger structured problems than is possible with traditional approaches. Possible applications include large-scale, dynamic, resource-allocation problems in telecommunications, networking, and scheduling, as well as multi-agent problems from distributed control and artificial intelligence.
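The abstract does not specify the algorithms themselves; purely as an illustration of the additive-payoff case, the Python sketch below learns one tabular Q-function per component from that component's local state and reward, and selects global actions greedily against the sum of the component values. All names, the toy dynamics, and the learning constants are assumptions of this illustration, not part of the proposal.

    import random
    from collections import defaultdict

    # Illustrative sketch only: tabular Q-learning with an additively
    # decomposed payoff. Each component i observes only its local state s_i
    # and receives its local reward r_i; the global action is scored by the
    # SUM of the component Q-values. The environment here is a made-up toy.

    N_COMPONENTS = 2          # assumed: two weakly interacting components
    ACTIONS = [0, 1]          # assumed: shared global action set
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    # One Q-table per component, keyed by (local_state, action).
    q = [defaultdict(float) for _ in range(N_COMPONENTS)]

    def step(state, action):
        """Toy dynamics: each component's next local state and reward depend
        only on that component, standing in for a factored environment."""
        next_state = tuple((s + action) % 3 for s in state)
        rewards = [1.0 if s == 0 else 0.0 for s in next_state]  # additive payoff
        return next_state, rewards

    def greedy_action(state):
        # Global value of an action = sum of per-component local values.
        return max(ACTIONS, key=lambda a: sum(q[i][(state[i], a)]
                                              for i in range(N_COMPONENTS)))

    state = (0, 1)
    for _ in range(5000):
        action = random.choice(ACTIONS) if random.random() < EPS else greedy_action(state)
        next_state, rewards = step(state, action)
        a_next = greedy_action(next_state)
        for i in range(N_COMPONENTS):
            # Each component performs an independent TD update on its local
            # Q-table, bootstrapping from the jointly greedy next action.
            target = rewards[i] + GAMMA * q[i][(next_state[i], a_next)]
            q[i][(state[i], action)] += ALPHA * (target - q[i][(state[i], action)])
        state = next_state

    print("greedy action in state (0, 1):", greedy_action((0, 1)))

The point of the decomposition is that each per-component table is exponentially smaller than a monolithic table over the joint state; treating the components as independent is exactly the approximation that the structure-dependent compensation described above is meant to correct.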