Computing is changing dramatically, particularly for cloud-based service providers such as Facebook, Google, and Amazon. Online service applications, such as social networking and search, place unique demands on processor memory systems. In particular, these "big-memory" applications have working-set sizes several orders of magnitude larger than those of the workloads typically used in computer architecture research, and consequently stress processor memory systems in different ways. Simultaneously, new non-volatile memory (NVM) technologies such as phase-change memory (PCM), spin-transfer torque random access memory (STT-RAM), and memristors are emerging as replacements for, or augmentations to, traditional dynamic RAM (DRAM) main memory. These new memory technologies promise higher capacity and fast access times along with non-volatility (data retention when power is off). They therefore have the potential to bridge the capacity and speed gaps in current processor memory systems, enabling new usage models such as storage-class memory or combined main-memory-and-storage implementations. Together, these trends argue for new memory system architectures designed for the challenges of big-memory applications, leveraging new memory technologies alongside traditional DRAM and emerging fabrication techniques such as 3-D die stacking.

This research will characterize big-memory applications in light of the coming availability of much larger, non-volatile memories closer to the processor, and will study the implications of these applications for emerging memory architectures in terms of organization, hierarchy, and other structural and management questions. In particular, the research focuses on developing:

1) Memory architectures for big-memory applications that leverage emerging technologies such as 3-D die stacking and new, byte-addressable, dense non-volatile memories;
2) Deeply speculative instruction and data prefetchers for big-memory applications;
3) Cache policies that proactively manage performance, power, and reliability in NVM-based memory systems for future big-memory applications;
4) New memory-translation microarchitectures that meet the needs of big-memory applications and storage-class main memories; and
5) Quality-of-service policies that manage memory placement based on usage in future hybrid and composite memory systems composed of DRAM and new NVM technologies.

The educational impact of this research includes training graduate and undergraduate students in valuable research skills while advancing the state of the art in computer architecture and distributed systems, contributing to the technology workforce.
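To make the hybrid-memory placement idea concrete, the following is a minimal illustrative sketch, not the project's actual design: a toy access-count-based policy for a two-tier memory built from a small, fast DRAM tier and a large, slower NVM tier. All names, thresholds, and the promotion/demotion rule here are assumptions chosen for illustration.

```python
# Hypothetical sketch of usage-based page placement in a hybrid
# DRAM/NVM memory (illustrative only; not the proposed mechanism).
# Pages start in the large NVM tier; a page that proves hot enough is
# promoted to DRAM, and when DRAM is full the coldest resident page
# is demoted back to NVM.

from collections import defaultdict

class HybridMemoryPlacer:
    def __init__(self, dram_pages, promote_threshold=4):
        self.dram_capacity = dram_pages        # DRAM tier size, in pages
        self.promote_threshold = promote_threshold
        self.dram = set()                      # pages resident in DRAM
        self.counts = defaultdict(int)         # per-page access counts

    def access(self, page):
        """Record one access to `page`; return the tier that served it."""
        self.counts[page] += 1
        if page in self.dram:
            return "DRAM"
        # Promote a page once its access count crosses the threshold;
        # the triggering access itself is still served from NVM.
        if self.counts[page] >= self.promote_threshold:
            if len(self.dram) >= self.dram_capacity:
                coldest = min(self.dram, key=lambda p: self.counts[p])
                self.dram.remove(coldest)      # demote coldest page to NVM
            self.dram.add(page)
        return "NVM"

# Tiny synthetic trace: pages 1 and 3 become hot, page 2 stays cold.
placer = HybridMemoryPlacer(dram_pages=2)
trace = [1, 1, 1, 1, 2, 1, 3, 3, 3, 3, 1]
tiers = [placer.access(p) for p in trace]
```

Running the trace shows the intended behavior: the first four accesses to page 1 are served from NVM (the fourth triggers promotion), after which page 1 hits in DRAM, while the single access to page 2 never earns promotion. A real policy would of course also weigh write endurance and migration cost, which this sketch ignores.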

Budget Start: 2013-09-01
Budget End: 2017-08-31
Fiscal Year: 2013
Total Cost: $439,802
Name: Texas A&M Engineering Experiment Station
City: College Station
State: TX
Country: United States
Zip Code: 77845