While the energy costs of data centers continue to double every five years, what is most disappointing is that most of this power is wasted: servers are busy only 10-30% of the time on average, yet they are often left on while idle, consuming 60% or more of peak power. This project, being conducted at Carnegie Mellon University, investigates simple, load-oblivious, distributed auto-scaling of data center capacity to reduce power consumption. Our policies scale the number of servers that are powered on as a function of changing load, shutting servers down, putting them into sleep states, or changing the frequency at which they run, without requiring advance knowledge of the load. Our work combines a full-scale implementation in a 23-server multi-tier data center with queueing-theoretic analysis, which narrows the search space of solutions and guides the algorithmic development. We expect to develop algorithms that significantly reduce power consumption while still meeting response-time service level agreements. Broader impacts of this work include direct collaboration with Intel's Pittsburgh labs, publication of a textbook on queueing-theoretic modeling and its applications, and weekly coaching and management of the Western PA American Regions Mathematics League team, to train the smartest young minds of tomorrow.
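To make the flavor of such a policy concrete, the following is a minimal sketch of one simple load-oblivious rule, a hypothetical "delayed-off" timeout policy: each server serves jobs while work is present, counts down an idle timer when it has nothing to do, shuts off when the timer expires, and a sleeping server is woken whenever jobs queue up with no server available. This is an illustration of the general idea only, not the project's actual algorithm; the function name, parameters, and discrete-time model are all assumptions made for the example.

```python
# Hypothetical illustration of a load-oblivious "delayed-off" auto-scaling
# policy (not the project's actual algorithm). Discrete time steps; each
# busy server completes one job per step.

def simulate(arrivals, num_servers=4, idle_timeout=3):
    """Return the number of powered-on servers after each time step.

    arrivals: list of job counts arriving at each step.
    idle_timeout: steps a server may sit idle before shutting off.
    """
    # idle_timer[i]: remaining idle steps before server i turns off;
    # None means the server is currently off (asleep).
    idle_timer = [idle_timeout] * num_servers
    queue = 0
    on_history = []
    for a in arrivals:
        queue += a
        for i in range(num_servers):
            if idle_timer[i] is None:
                continue                      # server is off
            if queue > 0:
                queue -= 1                    # serve one job this step
                idle_timer[i] = idle_timeout  # busy: reset idle timer
            else:
                idle_timer[i] -= 1            # idle: count down
                if idle_timer[i] <= 0:
                    idle_timer[i] = None      # timeout expired: power off
        if queue > 0:
            # Work is waiting: wake one sleeping server, if any exists
            # (it begins serving on the next step).
            for i in range(num_servers):
                if idle_timer[i] is None:
                    idle_timer[i] = idle_timeout
                    break
        on_history.append(sum(t is not None for t in idle_timer))
    return on_history
```

The policy never inspects the arrival rate directly; it reacts only to its own idleness and to the presence of queued work, which is what makes it load-oblivious. The choice of `idle_timeout` trades wasted idle power against the response-time cost of waking servers back up.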