This project will deploy a pilot Software-Defined Science Network (SDSN) to interconnect research computing resources on the Duke campus and link them to national WAN circuit fabrics. The pilot deployment will support both Computer Science applications such as GENI and domain sciences such as Duke's Institute for Genome Sciences and Policy (IGSP) and Duke's high-energy physics (HEP) group. The SDSN will also connect to the Duke Shared Cluster Resource (DSCR), a 4,300-core shared batch job service used by domain scientists across the Duke campus. The key goal is to enable domain scientists to request virtual networks that serve as "simple and scalable enclaves" for science networking, linking selected resources on campus with selected resources outside while excluding unrelated traffic.

The project will experiment with OpenFlow controllers on a trial basis within isolated flowspace slices of the SDSN, including OpenFlow-enabled traffic engineering policies that offload science traffic onto the SDSN. Initial trial demonstrations will exercise a cloudbursting capability to expand a computing service into a cloud site and, potentially, support virtual machine migration among the OpenStack cloud testbeds on campus. The PIs will report on their experience in technical papers, and the budget includes travel to present lessons learned to other university CIOs and CTOs. Although software-defined networking (SDN) technologies are currently being widely discussed and are key elements in the GENI architecture, there is little operational or campus-level architectural experience with using them. The project will advance the state of the art in integrating SDN technologies into campus networks and in enabling safe, controlled interconnection of science resources and GENI resources within and across campuses. The project seeks to devise and implement practical solutions that are easily reproducible beyond the initial prototype, scalable to wider use, and grounded in technologies that are (or soon will be) solid, manageable, and commercially available for deployment throughout production campus networks. The project outcomes include reporting of results and lessons to other campus network operators and to SDN researchers and industry.
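To make the offload idea concrete: under OpenFlow, such a traffic engineering policy reduces to flow rules that match science traffic and forward it out an SDSN-facing port, while a low-priority default rule leaves all other traffic on the ordinary campus path. The sketch below is illustrative only, written for the open-source Ryu framework (one of the controller platforms this project went on to use); the science subnet and port numbers are assumptions, not details from the project.

```python
# Illustrative Ryu controller app: offload traffic bound for a
# (hypothetical) science subnet onto a dedicated SDSN uplink port,
# while leaving all other traffic to the switch's normal pipeline.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

SCIENCE_SUBNET = ('10.200.0.0', '255.255.0.0')  # assumed science block
SDSN_UPLINK_PORT = 48                           # assumed SDSN-facing port

class ScienceOffload(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # High-priority rule: IPv4 traffic destined for the science
        # subnet exits via the SDSN uplink instead of the campus core.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=SCIENCE_SUBNET)
        actions = [parser.OFPActionOutput(SDSN_UPLINK_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))

        # Low-priority default: hand everything else to the switch's
        # built-in forwarding (OFPP_NORMAL), preserving campus access.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

The two-rule pattern keeps such a design failsafe: if the controller installs nothing further, hosts attached to the SDN switch still reach the campus network through the switch's normal forwarding pipeline.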

Broader Impact: The pilot will provide an opportunity to gain experience with the architectural, deployment, administrative, and operational issues of OpenFlow in campus settings, serving research and education needs beyond the Computer Science domain. The Duke campus OpenFlow model (a GENI-derived technology) offers domain sciences on-demand access to ultra-high-speed networks without performance-limiting firewalls, and as such will provide direct benefit to the domain sciences. The PIs will report on issues and operational experience associated with the deployment. These reports, and the PIs' willingness to share their experience with other universities, will reduce barriers to use of GENI from campuses, establish GENI technologies and OpenFlow as building blocks for science networking, enhance support for computational science on the Duke campus, and facilitate sharing of resources and data among science researchers and their collaborators on and off campus.

Project Report

This project established an initial deployment of an advanced science network that interconnects research resources on the Duke campus and links them to national and international research networks. The outcomes of the project support data-intensive scientific collaboration and enhanced access to computational resources, including the "virtual cloud computing" clusters increasingly deployed on campuses. The two-year project responds to a demonstrated need: scientific research today has networking demands that are not met by "general purpose" campus networks.

The key technical challenge was to integrate the new capabilities of the science network seamlessly with the existing campus network. In particular, computers attached to the science network retain their access to the campus network; the choice of which network to use for any given traffic flow is driven by central policy, without requiring scientists or system administrators to reconfigure their computers. The project's approach leveraged an emerging technology called Software-Defined Networking (SDN), based on the OpenFlow standards. At the start of the project, vendors of network switch equipment were just beginning to support SDN, which promised nimble configuration of networks by software programs (controllers) that run on servers attached to the network. An overarching goal of the project was to learn how to take advantage of the power of SDN on Duke's campus, both from an administrative and operational standpoint and as a practical technology to enhance security and accelerate scientific collaboration and discovery.

The project team evaluated SDN switches from several vendors and experimented with several open-source platforms for controller software (Floodlight, POX, OpenDaylight, Ryu), built an initial science network from SDN switches that met our requirements, linked it to the campus network, developed and deployed a controller (based on Ryu) for the new SDN science network, and tested connectivity and new network functions. The initial pilot site was a cloud computing cluster on the Duke campus that is part of ExoGENI, a national cloud testbed built under NSF's Global Environment for Network Innovations (GENI) initiative. The project upgraded Duke's ExoGENI cluster and connected it to the new SDN science network. In addition, the project was conducted in concert with a related NSF-sponsored project (CC-NIE: Network Infrastructure, OCI-1246042), which began during the first year of funding. Among other goals, the ongoing CC-NIE project will expand the prototype SDN science network to additional sites on campus and integrate new monitoring functions (based on the perfSONAR tool).

The new SDN science network enables "on the fly" changes to the policies that control how traffic moves through the network. In particular, authorized members of specific labs or workgroups can control, by mutual consent, how traffic moves between their systems. By default, traffic between groups moves through the general campus network, where it is subjected to various security checks. These checks slow traffic and are unnecessary for scientific data transfers among collaborating groups on campus. To accomplish this, the project developed a Web service called Switchboard that devolves policy control to designated personnel, who are empowered to request fast paths through the SDN science network for traffic moving between systems that they control and specific other systems on campus.
Switchboard authenticates their requests using the NSF-funded Shibboleth identity management infrastructure and checks their permissions against its database. It then registers approved policy changes with the SDN controller, which seamlessly reconfigures the network to effect the change by installing rules in the SDN switches. These rules provide an "on-ramp" for the designated traffic to the fast SDN science network. These new SDN services also enable campus personnel to connect their systems with dynamic virtual networks, including networks hosted on the ExoGENI cloud, high-speed network circuits from national network backbone fabrics, and other GENI resources. The project members worked closely with staff at RENCI to link the Internet2 Advanced Layer 2 Service (AL2S) to the Duke campus, where it connects into the new SDN science network and from there to Duke's ExoGENI cluster and other science resources on campus.
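The exact interface between Switchboard and the controller is not spelled out here, but the general pattern can be sketched. The fragment below is a hypothetical illustration, not the project's code: it assumes the Ryu-based controller runs Ryu's stock REST module (ryu.app.ofctl_rest) on its default port 8080, and shows how an approved fast-path request might be translated into a pair of "on-ramp" flow rules, one per direction. The controller URL, datapath ID, priority, and function name are all assumptions.

```python
# Hypothetical final step of a Switchboard-style service: push an
# approved fast-path request to the SDN controller as flow rules.
# Assumes the controller exposes Ryu's stock REST API (ofctl_rest).
import requests

CONTROLLER = 'http://sdn-controller.example.duke.edu:8080'  # assumed URL

def install_fast_path(dpid, src_ip, dst_ip, onramp_port):
    """Install a bidirectional on-ramp for traffic between two hosts."""
    for a, b in ((src_ip, dst_ip), (dst_ip, src_ip)):
        rule = {
            'dpid': dpid,          # switch where the hosts attach
            'priority': 200,       # above the default campus-path rule
            'match': {'eth_type': 2048, 'ipv4_src': a, 'ipv4_dst': b},
            'actions': [{'type': 'OUTPUT', 'port': onramp_port}],
        }
        r = requests.post(CONTROLLER + '/stats/flowentry/add', json=rule)
        r.raise_for_status()

# Example: after authentication and a permission check succeed, route
# traffic between two (hypothetical) lab hosts onto the science network.
# install_fast_path(dpid=1, src_ip='10.200.1.10',
#                   dst_ip='10.200.2.20', onramp_port=48)
```

A deployment along these lines would also need to revoke rules when a fast path expires or is withdrawn; ofctl_rest provides a matching /stats/flowentry/delete endpoint for that purpose, so traffic would then fall back to the default campus path.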

Agency
National Science Foundation (NSF)
Institute
Division of Computer and Network Systems (CNS)
Type
Standard Grant (Standard)
Application #
1243315
Program Officer
Joseph Lyles
Budget Start
2012-08-01
Budget End
2014-07-31
Fiscal Year
2012
Total Cost
$300,000
Name
Duke University
City
Durham
State
NC
Country
United States
Zip Code
27705