This project seeks to develop new algorithmic techniques for the design of, and routing in, data networks. Classical network optimization problems address transportation networks that carry physical commodities. Data networks are fundamentally different: data can be modified, compressed, and replicated at little cost. Incorporating this flexibility into network optimization problems can considerably improve the effective capacity of communication networks.
The project revisits classical network optimization problems within a new model where the cost of carrying data is non-linear and depends on its compressibility. It aims to provide theoretical underpinnings for new WAN optimization approaches proposed in the networking literature, such as traffic redundancy elimination.
The PI considers a model where links in the network can compress data and remove duplicate packets. For example, if two traffic streams contain identical data and use overlapping routes, the edges carrying both streams need only transmit the common packets once; the packets can then be replicated at the routers where the streams diverge. Accordingly, the load on an edge is a submodular function of the set of traffic streams that use the edge. Unlike previous work on network optimization with non-linear costs, in this model the cost of routing depends on the identities of the traffic streams involved, not just on the total traffic volume. Most of the problems considered are computationally intractable, and the PI aims to develop approximation algorithms for them.
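The following is a minimal illustrative sketch (not taken from the proposal) of how duplicate elimination makes edge load a submodular set-union function rather than a linear sum of traffic volumes; the stream names and packet identifiers are hypothetical.

# Illustrative sketch (assumptions, not from the proposal): each traffic
# stream is modeled as a set of packet identifiers.

def edge_load(streams_on_edge):
    """Load with redundancy elimination: each distinct packet is transmitted
    once, so the load is the size of the union of the streams' packet sets.
    This coverage (set-union) function is submodular in the set of streams."""
    distinct = set()
    for packets in streams_on_edge:
        distinct |= packets
    return len(distinct)

def classical_load(streams_on_edge):
    """Classical linear load: total traffic volume, counting duplicates."""
    return sum(len(packets) for packets in streams_on_edge)

# Two streams sharing most of their data (hypothetical packet ids).
stream_a = {1, 2, 3, 4}
stream_b = {3, 4, 5}

print(classical_load([stream_a, stream_b]))  # 7: every packet counted
print(edge_load([stream_a, stream_b]))       # 5: common packets sent once

# Diminishing returns (submodularity): adding stream_b to a larger bundle
# of streams increases the load by no more than adding it to a smaller one.
stream_c = {4, 5, 6}
gain_small = edge_load([stream_a, stream_b]) - edge_load([stream_a])            # 1
gain_large = edge_load([stream_a, stream_c, stream_b]) - edge_load([stream_a, stream_c])  # 0
assert gain_large <= gain_small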
Several graduate and undergraduate courses will benefit from this project.