The goal of this research project is to create new algorithms for numerical function approximation that are particularly suited to value function approximation in reinforcement learning. These algorithms localize the approximation error in the domain of the function and respond by constructing new features that enable further error reduction. It is essential that these algorithms approximate well even when the target function appears to change as learning progresses. The results of this research will be algorithms that reduce error in a repeatable manner and that produce approximations that are inspectable and understandable. Application tasks for the research will include instruction scheduling and autonomous agent policy learning. The research aims to enable practitioners who employ automatic learning methods to achieve more accurate and more understandable results with less human engineering.
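The core idea of localizing approximation error and constructing features in response can be illustrated with a minimal sketch. This is not the proposal's actual algorithm; it is a hedged example in which a Gaussian (RBF) feature is greedily added wherever the residual error of a 1-D least-squares fit is largest. The function names, the RBF width, and the feature budget are all illustrative assumptions.

```python
import numpy as np

def rbf(x, center, width=0.1):
    # Gaussian radial basis feature centered at `center`.
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def fit_with_error_localization(x, y, n_features=10):
    """Greedily add RBF features centered where the residual error peaks,
    then refit all weights by least squares (illustrative sketch only)."""
    centers = []
    weights = np.zeros(0)
    residual = y.copy()
    for _ in range(n_features):
        # Localize the worst-approximated point and center a new feature there.
        centers.append(x[np.argmax(np.abs(residual))])
        Phi = np.column_stack([rbf(x, c) for c in centers])
        weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ weights
    return centers, weights

# Toy target function standing in for a value function.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
centers, weights = fit_with_error_localization(x, y)
Phi = np.column_stack([rbf(x, c) for c in centers])
max_err = np.max(np.abs(y - Phi @ weights))
```

Because each feature is an explicit basis function with a known center and weight, the resulting approximation is also inspectable: one can list where features were placed and how much each contributes, in the spirit of the understandable approximations the project targets.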