Digital simulation is becoming dominant not only in many fields of the physical sciences, but in computer science as well. Despite great improvements in processing speed over the past decades, advanced research on large-scale systems requires simulation engines far more powerful than the average desktop workstation.
The Department of Computer Sciences at the University of Texas at Austin will assemble an infrastructure for performing large-scale, memory-intensive simulations. Called Mastodon, this infrastructure will consist of a large number of rack-mounted Linux/x86 servers, each with multiple processors and several gigabytes of physical memory. Leveraging industrial and university matching funds, we anticipate scaling Mastodon to 418 processors and nearly a terabyte of DRAM. The cluster will support both distributed batch scheduling, using the Condor job management software, and soft partitioning for timing and parallel experiments.
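To illustrate how simulation workloads might be dispatched under this model, the sketch below shows a minimal Condor submit description file that queues 100 independent runs of a simulator; the executable name, arguments, memory requirement, and run count are hypothetical, not taken from any specific Mastodon project.

    # Minimal Condor submit description file (hypothetical values).
    # Queue 100 independent runs; $(Process) gives each a distinct index.
    universe     = vanilla
    executable   = simulate
    arguments    = --run $(Process)
    # Match only machines advertising at least 2 GB of physical memory.
    requirements = Memory >= 2048
    output       = run.$(Process).out
    error        = run.$(Process).err
    log          = simulate.log
    queue 100

Condor's matchmaker compares each queued job's requirements against the resource attributes advertised by the cluster's machines, so memory-intensive runs are placed only on nodes with sufficient physical memory, while soft-partitioned nodes reserved for timing and parallel experiments can be withheld from the batch pool.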
This infrastructure will serve the needs of a majority of our department's faculty. The projects that will utilize Mastodon span the areas of systems (specifically, computer architecture, compilers, and run-time systems), network algorithms, computational biology, multi-agent robotics, and formal verification. In each of these areas, Mastodon makes multiple projects feasible; with conventional computing infrastructure, most of these projects could not even be attempted.
Broader Impact: This unparalleled simulation resource will also be made available to a broader community, including students attending summer camps designed to increase female enrollment in computer science, undergraduates for both class projects and research, and researchers in regional departments that lack the resources to perform high-end simulations.