The traditional way to realize computation in hardware is to devise an algorithm that generates the desired result and then implement that algorithm in combinational logic. An alternative is rote memorization: precompute all possible results and store them in memory, a method we call Computational Memory. Our premise is that Computational Memory has properties that become particularly valuable as technologies scale to higher densities but lower reliability and higher performance variability. Memories are highly regular and therefore dense, easy to test, and dissipate power far more smoothly than logic. Fault tolerance for memories and interconnects can, in principle, be implemented systematically using error-correcting codes and/or error detection with recomputation, providing an avenue to address both reliability and performance variability in ultrascaled technology substrates. Such computational memory blocks are cleanly composable, making fault-tolerant designs more tractable and yielding much higher fault coverage. However, even with the high densities at our disposal, the exponential memory requirements of a naive implementation of Computational Memory call its practicality into question: a function of n input bits producing m output bits requires a table of m·2^n bits. The proposed research will show that this scalability problem can be overcome through the combined use of alternative number systems, coding, caching, and architectural techniques. If this work is successful, programmable fabrics based on Computational Memory might emerge as a standard computing framework for the nanotechnology era, someday enabling inexpensive "computing sticks" that provide portable computing power along with persistent storage in the form factor of today's flash memory.
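To make the idea and its scaling cost concrete, here is a minimal software sketch of Computational Memory, using an 8x8-bit multiplier as an assumed example function (the function choice, table layout, and names `precompute` and `lut_mul` are illustrative, not part of the proposal). The function has n = 16 input bits and m = 16 output bits, so the naive table holds 16 · 2^16 bits = 128 KiB; widening the operands to 16x16 bits would already demand 2^32 entries, which is the exponential blow-up the proposed techniques aim to overcome.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Computational Memory sketch: realize an 8x8-bit multiply by rote
 * memorization rather than combinational logic.  One entry per
 * possible input pattern: 2^16 entries of 16 bits each (128 KiB).
 * Note the exponential cost in input width: a 16x16-bit multiply
 * would need 2^32 entries under this naive scheme. */

static uint16_t *table;

/* Precompute every possible result once, up front. */
static void precompute(void)
{
    table = malloc(((size_t)1 << 16) * sizeof *table);
    if (!table) { perror("malloc"); exit(1); }
    for (unsigned a = 0; a < 256; a++)
        for (unsigned b = 0; b < 256; b++)
            table[(a << 8) | b] = (uint16_t)(a * b);
}

/* "Compute" a product with a single memory read: the concatenated
 * operands form the address, and the stored word is the result. */
static uint16_t lut_mul(uint8_t a, uint8_t b)
{
    return table[((unsigned)a << 8) | b];
}

int main(void)
{
    precompute();
    printf("13 * 7    = %u\n", lut_mul(13, 7));     /* 91 */
    printf("255 * 255 = %u\n", lut_mul(255, 255));  /* 65025 */
    free(table);
    return 0;
}

In hardware terms, `lut_mul` corresponds to a single memory access with no logic on the datapath, which is what makes the block regular, testable, and amenable to error-correcting codes; `precompute` corresponds to the one-time cost of filling the memory.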