Driven by the needs of mobile and cloud computing, demand for data storage is growing steeply, both toward higher storage density and toward faster access. A related emerging trend, driven by these access challenges, is in-memory computing, whereby computations are offloaded from the main processing units to the memory to reduce transfer time and energy. The challenges of future rapid storage access and in-memory computing cannot be addressed by conventional storage architectures, which inevitably trade off reliability and capacity against latency. This bottleneck calls for innovative research contributions that simultaneously maximize storage density, access performance, and computing functionality. This project addresses this imminent challenge by developing principled mathematical foundations for future computing systems able to meet the demands of new data-intensive applications, focusing on fundamental performance bounds, algorithms, and practical channel coding methods. The results of this project will be demonstrated on modern data-driven and machine learning applications, will advance the repertoire of mathematical techniques in the information sciences, and will directly impact future computer system architectures so that they meet the growing and wide-ranging societal and scientific needs for computing and rapid data processing.

Additionally, the proposal offers several mechanisms for broader impacts, including engagement with the data storage and memory industry through the existing research center that the principal investigator leads at UCLA, curriculum development and the introduction of new graduate courses in UCLA's online master's program in engineering, engagement of undergraduate researchers, and dissemination of the results through survey-style articles and tutorials.

Part 2:

The project has the following three complementary research goals:

1) Invention of new channel codes for reliable and fast memory access in latency-sensitive applications, with the study spanning general memories as well as schemes tailored to resistive memories. The proposed schemes will offer non-trivial extensions to active coding subjects: codes with locality (algebraic and graph-based) and constrained coding (a toy illustration of locality appears after this list);

2) Invention of new channel codes whose decoding is performed directly in memory, so that competing requirements on latency and reliability can be satisfied simultaneously. Here the decoder itself is subject to computational errors, and these errors are data-dependent. The analysis will lead to bounds and practical code designs robust to data-dependent errors; an exemplar will be codes designed using spatial coupling and decoded using windowed message-passing decoders (a toy illustration of decoding with unreliable check computations appears after this list);

3) Development of novel fundamental bounds, algorithms, and channel codes for robust in-memory computing, with a focus on quantifying the robustness of computing primitives used in statistical inference and other machine learning algorithms in modern data-driven applications. The results will include fundamental performance limits and new coding-based methods that simultaneously combat sneak paths and computing noise. The analysis will cover coding for (noisy) Hamming/Euclidean similarity calculations, evaluated in the context of practical machine learning applications (a toy illustration of a noisy similarity computation appears after this list).
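To make the locality notion in goal 1 concrete, here is a minimal sketch assuming a toy binary layout invented only for illustration (it is not one of the proposed constructions): four information bits are split into two local groups, each protected by a single XOR parity, so one unreadable symbol can be repaired by reading only the two other symbols of its group rather than the whole codeword.

```python
def encode(info_bits):
    """Encode 4 information bits into 6 symbols: two local groups of two
    bits, each followed by its own XOR parity."""
    assert len(info_bits) == 4
    g0, g1 = info_bits[:2], info_bits[2:]
    p0 = g0[0] ^ g0[1]            # local parity of group 0
    p1 = g1[0] ^ g1[1]            # local parity of group 1
    return g0 + [p0] + g1 + [p1]  # positions 0-2: group 0, positions 3-5: group 1

def repair(codeword, erased_pos):
    """Recover one erased symbol by reading only the two other symbols of its
    local group (locality r = 2), rather than the whole codeword."""
    group = range(0, 3) if erased_pos < 3 else range(3, 6)
    value = 0
    for i in group:
        if i != erased_pos:
            value ^= codeword[i]
    return value

if __name__ == "__main__":
    cw = encode([1, 0, 1, 1])
    lost = 4                      # pretend symbol 4 could not be read
    print("repaired:", repair(cw, lost), "stored:", cw[lost])
```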
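For goal 2, the sketch below stands in for the proposed spatially coupled codes and windowed message-passing decoders with a small (7,4) Hamming code and single-error syndrome decoding, purely to show the effect of computational errors inside the decoder itself. The data-dependent fault model, in which each check result flips with a probability proportional to the number of stored 1s it reads, is a hypothetical assumption chosen only for this illustration.

```python
import random

# Parity-check matrix of the (7,4) Hamming code (last three columns form an identity).
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def encode(info):
    """Systematic encoding: append the three parity bits defined by H."""
    parity = [sum(H[i][j] * info[j] for j in range(4)) % 2 for i in range(3)]
    return info + parity

def noisy_syndrome(word, alpha):
    """Compute the syndrome with unreliable check hardware: each check result
    flips with probability alpha times the number of 1s it reads (a
    hypothetical data-dependent fault model; alpha = 0 gives ideal checks)."""
    syndrome = []
    for row in H:
        ones = sum(w for w, h in zip(word, row) if h)
        bit = ones % 2
        if random.random() < alpha * ones:
            bit ^= 1
        syndrome.append(bit)
    return syndrome

def decode(word, alpha):
    """Single-error syndrome decoding; the syndrome of a single error equals
    the corresponding column of H."""
    word = list(word)
    syndrome = noisy_syndrome(word, alpha)
    if any(syndrome):
        for j in range(7):
            if [H[i][j] for i in range(3)] == syndrome:
                word[j] ^= 1
                break
    return word

def failure_rate(alpha, trials=5000, p_channel=0.05):
    """Monte Carlo word-error rate over a memoryless bit-flip channel."""
    failures = 0
    for _ in range(trials):
        info = [random.randint(0, 1) for _ in range(4)]
        codeword = encode(info)
        received = [b ^ (random.random() < p_channel) for b in codeword]
        if decode(received, alpha) != codeword:
            failures += 1
    return failures / trials

if __name__ == "__main__":
    random.seed(1)
    print("ideal checks :", failure_rate(alpha=0.0))
    print("faulty checks:", failure_rate(alpha=0.02))
```

Comparing the two printed rates shows how even a modest data-dependent fault rate in the check computations degrades the word-error rate relative to an ideal decoder; the project's goal is code and decoder designs whose performance is provably robust to such faults.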
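For goal 3, the following sketch estimates a Hamming distance with unreliable in-array bit comparisons, assuming a hypothetical fault model in which each comparison flips its output with probability p_comp. A simple repetition code with per-coordinate majority voting stands in for the more efficient coding-based methods the project targets; it is used here only because it is the simplest scheme that makes wrong comparisons progressively less likely as the number of repeats grows.

```python
import random

def noisy_compare(a, b, p_comp):
    """One in-array bit comparison; its output flips with probability p_comp
    (a hypothetical fault model for the analog comparison circuitry)."""
    out = a ^ b
    if random.random() < p_comp:
        out ^= 1
    return out

def hamming_estimate(x, y, p_comp, repeats=1):
    """Estimate the Hamming distance using `repeats` noisy comparisons per
    coordinate, combined by a per-coordinate majority vote."""
    dist = 0
    for a, b in zip(x, y):
        votes = sum(noisy_compare(a, b, p_comp) for _ in range(repeats))
        dist += int(2 * votes > repeats)
    return dist

if __name__ == "__main__":
    random.seed(0)
    n = 256
    x = [random.randint(0, 1) for _ in range(n)]
    y = [random.randint(0, 1) for _ in range(n)]
    true_distance = sum(a ^ b for a, b in zip(x, y))
    for r in (1, 3, 7):
        estimate = hamming_estimate(x, y, p_comp=0.05, repeats=r)
        print(f"repeats={r}: estimate={estimate}, true={true_distance}")
```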

Results from this project will also contribute to curriculum development at UCLA and will offer new opportunities for the engagement of undergraduate researchers from underrepresented groups.

Budget Start: 2017-09-01
Budget End: 2020-08-31
Fiscal Year: 2017
Total Cost: $470,000
Name: University of California Los Angeles
City: Los Angeles
State: CA
Country: United States
Zip Code: 90095