In the proposed research, highly reliable memories made of unreliable components will be developed and characterized in terms of their complexity and their ability to retain stored information. The main challenge is that in nano-scale systems both the storage elements and the logic gates are faulty. This is in contrast to state-of-the-art systems, in which only the memory elements are considered unreliable while the error-correction encoders and decoders are assumed to be built from reliable logic gates. The set of problems that will be addressed in this research can be condensed into the following question: given n memory cells and m universal logic gates that fail according to a known random mechanism, what is the optimal memory architecture that stores the maximum number of information bits for the longest period of time with arbitrarily low probability of error? This complex problem can be divided and reformulated in many ways, yet, interestingly, even some of the most fundamental questions related to it remain unanswered. The most important question concerns the following two fundamentally different approaches to fault-tolerant memories: (i) to improve reliability, the logic-gate resources may be invested in a von Neumann multiplexing scheme; in this way one can build highly redundant, reliable networks that simulate the function of universal logic gates, and then use these better gates to build an error-correction encoder and decoder; or (ii) alternatively, the logic-gate resources may be invested in building a more powerful error-correcting code (i.e., decoder) capable of handling both memory-element and logic-gate errors. Which of these two approaches is optimal for a given failure mechanism? More broadly, is it better to address reliability at the device level or at the system level?

Intellectual Merit: The unique feature of nano-systems, namely that both the storage elements and the logic gates are unreliable, makes the problem of ensuring fault tolerance theoretically very important, because the process of error correction is no longer error-free, as is assumed in classical information theory. Making error-correcting codes stronger and transmitters and receivers more complex will not necessarily improve the performance of a system; it is likely that, for a given failure mechanism, there is a trade-off between receiver complexity and performance. Our approach to developing fault-tolerant memory architectures is based on a method developed by Taylor and refined by Kuznetsov. Taylor and Kuznetsov (TK) showed that memory systems have nonzero computational (storage) capacity, i.e., the redundancy necessary to ensure reliability grows asymptotically linearly with the memory size. Two fundamental open problems that will be addressed in this research are determining the storage capacity of nano-scale memories and developing capacity-approaching fault-tolerant architectures. The equivalence of the restoration phase in the TK method and the faulty Gallager-B algorithm (as explained in the Project Description) will enable us to tackle these and other important problems in reliable storage on unreliable media using the large body of knowledge on codes on graphs and iterative decoding gained in the past decade.

Broader Impact: This program will contribute significantly to the evolution of data storage technologies and the information infrastructure in the United States of America and abroad.
Another important aspect is the establishment of a tight interdisciplinary integration of knowledge in coding, signal processing, and nano-scale devices and subsystems into programs benefiting undergraduate and graduate students at the University of Arizona as well as the industrial research community.
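To make the central trade-off discussed above concrete, the following is a minimal, self-contained simulation sketch, not the TK architecture or the proposal's actual design: a logical bit is stored in n replicated cells that flip randomly each time step and are periodically refreshed by a majority-vote gate that is itself faulty, in the spirit of von Neumann-style redundancy built from unreliable correcting logic. All function names, parameters, and numerical values (p_mem, p_gate, and so on) are illustrative assumptions.

```python
# Toy model (illustrative assumptions only): n replicated memory cells,
# each flipping with probability p_mem per step, refreshed each step by
# a majority-vote "restoration" gate that itself errs with probability
# p_gate. This is a sketch of the faulty-correction setting, not the
# TK scheme or the faulty Gallager-B decoder described in the proposal.
import random

def simulate(n_copies: int, p_mem: float, p_gate: float,
             n_steps: int, n_trials: int, seed: int = 0) -> float:
    """Return the fraction of trials in which the stored bit (0) survives."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        cells = [0] * n_copies                  # store logical 0 in every copy
        for _ in range(n_steps):
            # Memory cells decay independently.
            cells = [c ^ (rng.random() < p_mem) for c in cells]
            # Faulty majority-vote restoration: the correcting gate can err too.
            majority = int(sum(cells) > n_copies // 2)
            if rng.random() < p_gate:
                majority ^= 1
            cells = [majority] * n_copies
        final = int(sum(cells) > n_copies // 2)
        survived += (final == 0)
    return survived / n_trials

if __name__ == "__main__":
    # More copies suppress cell errors, but reliability is capped by the
    # error probability of the unreliable correcting logic itself.
    for n in (3, 7, 15):
        print(n, simulate(n_copies=n, p_mem=0.05, p_gate=0.01,
                          n_steps=100, n_trials=2000))
```

The point of the toy example is that adding copies suppresses cell errors only up to an error floor set by the unreliable correcting logic, which is exactly the tension between investing gate resources in better gates (approach (i)) and investing them in a stronger decoder (approach (ii)).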