The distinction between computing and communication has blurred with improvements in semiconductor technology and in the speed and reliability of computer networks. At a macroscopic level, individuals increasingly rely on smartphones, cloud computing, and similar infrastructure that combines computing systems with networking to accomplish their daily tasks. Many end-users even treat the client device, the server, and the network as one blended unit used for productivity and/or entertainment. At the microscopic level, the linkage between the two areas ideally should not require special-purpose software, because installing software on a network node to enable this blending can make the node less stable and more complex and can introduce security flaws. However, linking the two domains without additional software remains a significant challenge. This research focuses on understanding and characterizing the connection between the computing node and the network to meet that challenge. For example, by simply probing a node's network traffic and collecting its responses, it can be determined whether the internal components (e.g., the microprocessor) are heavily utilized. If the node is expected to be idle, this indicator could signal that the node has been compromised and is running unauthorized software.

This project uses a holistic approach that combines computer architecture and computer networking to investigate and characterize how the microarchitecture affects the network packet generation process. Because the internal components of a node are shared resources among all processes, including those that perform network-based I/O, the load on those components can be inferred by observing variations in the delay between successive network packets generated by the node. This inference materializes as a "delay signature" that blends the areas of architecture and networking. The delay signature provides information that can be attributed to the internal state and settings of the microarchitecture; architectural settings such as processor affinity, multi-threading, and power-saving modes all affect it. The PIs use a hardware testbed and a system simulator to characterize and model the basic system components that can have a significant (direct or indirect) impact on packet generation. The project culminates with the creation of a general-purpose engine that automates the detection of utilization signatures and uses those signatures to predict node utilization remotely. The PIs incorporate team-based laboratory projects into their computer architecture and computer networking courses to demonstrate the relationship between the two domains and to promote integrated learning in both areas. Further, the investigators employ an outreach plan with complementary components to (1) encourage students to enter the science and engineering fields and (2) mentor potential faculty members to serve as educators and role models. Potential applications of the delay signature include (1) providing security for networked nodes by monitoring for unauthorized utilization and (2) enabling efficient scheduling in cluster grids.
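As a rough illustration of the measurement idea only (not a description of the project's actual tooling), the Python sketch below sends a short burst of UDP probes to a hypothetical echo service on a target node, records when each response arrives, and treats high variance in the inter-response delays as a possible sign that the node's shared internal resources are busy. All addresses, ports, counts, and thresholds here are assumptions chosen for illustration.

    # Illustrative sketch only: the address, port, probe count, and threshold
    # are hypothetical; it assumes a UDP echo-style service on the target.
    import socket
    import statistics
    import time

    TARGET = ("192.0.2.10", 7)   # documentation-range address, UDP echo port
    PROBE_COUNT = 50             # assumed number of probe packets

    def probe_delays(target, count=PROBE_COUNT, timeout=1.0):
        """Send probes and return the delays between successive responses."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        arrivals = []
        for _ in range(count):
            sock.sendto(b"probe", target)
            try:
                sock.recvfrom(64)
                arrivals.append(time.perf_counter())
            except socket.timeout:
                pass  # lost probe; skip it
        sock.close()
        # Successive differences between response arrivals form the raw
        # "delay signature" observed from outside the node.
        return [t2 - t1 for t1, t2 in zip(arrivals, arrivals[1:])]

    delays = probe_delays(TARGET)
    if delays:
        jitter = statistics.pstdev(delays)
        print("mean delay:", statistics.mean(delays), "jitter:", jitter)
        # On a nominally idle node, high jitter may indicate hidden load;
        # the 5 ms cutoff below is purely illustrative.
        print("possibly loaded" if jitter > 0.005 else "consistent with idle")

In this sketch, changing the target node's architectural settings (for example, pinning the responding process to a core or enabling a power-saving mode) would be expected to shift the measured delay statistics, which is the effect the project seeks to characterize and model.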

Budget Start: 2013-08-01
Budget End: 2016-07-31
Fiscal Year: 2013
Total Cost: $250,000
Name: Georgia Tech Research Corporation
City: Atlanta
State: GA
Country: United States
Zip Code: 30332