This project investigates how a new processor paradigm, multi-core architectures, changes the way Parallel Discrete Event Simulation (PDES) is done. This topic is important given the wide use of simulation and the emergence of multi-core architectures. PDES is likely to play an increasingly important role in discrete event simulation as Moore's Law is sharply curtailed and explicit parallelism becomes the major avenue for improving application performance. Improving PDES performance translates to improved simulation capability across the many application domains that rely on it.

Discrete Event Simulation (DES) is widely used for performance evaluation in many application domains. The fine-grained nature of PDES causes its performance and scalability to be limited by communication latency. The emergence of multi-core architectures and their expected evolution into manycore systems offers potential relief to PDES and other fine-grained parallel applications because the cost of communication within a chip is dramatically lower than that of conventional networked communication. With communication latency no longer the dominant effect, PDES performance will be determined by issues such as load balancing, synchronization and optimism control, and the choice and configuration of the simulator's other algorithms and data structures. Operation in a manycore environment introduces new system tradeoffs that must be effectively balanced by the system software. In particular, pressure on the memory system and resilience to load fluctuations will emerge as critical issues, which the proposed research addresses. Finally, the more predictable communication cost in this environment (due in part to the more frequent synchronization possible between nearby cores) can be exploited, especially by static analysis, for effective simulation.
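To make the fine-grained nature of the workload concrete, the sketch below shows the core event loop of a sequential discrete event simulator. It is a minimal illustration only, not code from ROSS or any particular engine; the Simulator class and the schedule, run, and tick names are hypothetical. Each event typically performs only a small amount of computation before scheduling further events, which is why, once a model is partitioned across processors, the cost of exchanging events between logical processes dominates PDES performance.

```python
import heapq

class Simulator:
    """Minimal sequential DES core: a priority queue of timestamped events.

    Illustrative sketch only; names and structure are hypothetical.
    """
    def __init__(self):
        self.now = 0.0
        self._queue = []   # future event list, ordered by timestamp
        self._seq = 0      # tie-breaker for events with equal timestamps

    def schedule(self, delay, handler, *args):
        """Insert an event to fire `delay` time units from now."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, args))
        self._seq += 1

    def run(self, end_time):
        """Repeatedly pop the earliest event and execute its handler."""
        while self._queue and self._queue[0][0] <= end_time:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(self, *args)   # handlers may schedule further events


# Usage example: a trivial model whose only event re-schedules itself.
def tick(sim, count):
    if count > 0:
        sim.schedule(1.0, tick, count - 1)

sim = Simulator()
sim.schedule(0.0, tick, 5)
sim.run(end_time=10.0)
print("simulation ended at t =", sim.now)
```

In a parallel setting, each logical process runs a loop like this over its own portion of the model, and events destined for other processes must be communicated and synchronized (conservatively or optimistically), which is where the on-chip communication advantages discussed above come into play.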

As multi-cores become the default microprocessor architecture, performance-constrained applications must evolve to use parallelism to take advantage of the resources available on the cores. This project's advances in PDES can have a significant impact on a number of applications that rely on discrete event simulation. The PIs plan to incorporate the research results into a graduate-level course on parallel simulation techniques and to involve undergraduate students in the project.

Project Report

Simulation is a critical capability used in the design and evaluation of systems across a wide range of domains. Parallel simulation can improve the performance and capacity of simulation, allowing us to study larger models in more detail and under more scenarios. In this project, we explored how to improve the performance of parallel discrete event simulation on emerging multi-core and many-core computing systems. We identified and characterized the bottlenecks and developed new algorithms and optimizations that significantly improve the performance of parallel simulation on these platforms. We looked at how communication support can be improved to take advantage of the memory hierarchy available on such systems, and we explored these issues on three multi-core architectures with significantly different designs. We developed approaches to manage the high cost of communication across a network of multi-core systems. We also looked at how we can analyze the model being simulated and take advantage of its properties to improve simulation performance, and we explored how to make the simulator manage interference from co-located applications more effectively. These algorithms, optimizations, and the experiences gathered while developing and evaluating them represent the intellectual merit of the project.

The developed techniques were integrated within the ROSS simulation engine and made available to other researchers. This includes the core simulation and communication algorithms, synchronization optimizations, and memory optimizations. We also make available our partitioning algorithms, which are based on analysis of the simulation model, as well as our thread remapping approaches for managing interference from other applications. Experiences with all these investigations have been published and will inform the work of other researchers in parallel simulation, parallel computing in general, and manycore architecture design.

The results have been disseminated in 11 scientific publications in top journals and conferences in the area of parallel simulation. The project led to the training of one PhD student and one MS student in this important area. Through REU supplements, it also supported three undergraduate students, who were trained in this area as well. Educational material based on the project's results was developed and used in the graduate computer architecture class. These represent the broader impacts of the project.

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Network Systems (CNS)
Type: Standard Grant (Standard)
Application #: 0916323
Program Officer: M. Mimi McClure
Budget Start: 2009-09-01
Budget End: 2013-08-31
Fiscal Year: 2009
Total Cost: $358,031
Name: SUNY at Binghamton
City: Binghamton
State: NY
Country: United States
Zip Code: 13902