At the heart of most distributed and randomized graph algorithms is a Markov chain that 'visits and samples' a subset of the nodes in the graph and satisfies three properties: the answer derived from walking the graph converges, the transition probabilities to the next state depend only on locally available information, and the convergence is 'fast-mixing', i.e., efficient. This proposal will develop approaches for improving the mixing time of Markov Chain Monte Carlo (MCMC) algorithms. These approaches will then be applied as building blocks for algorithms in three distinct areas -- (1) sampling large graphs, (2) wireless multihop scheduling, and (3) duty cycling in wireless sensor networks. A minimal illustration of the locality property appears after this paragraph.
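To make the locality requirement concrete, the following is a minimal sketch of a Metropolis-Hastings random walk whose stationary distribution is uniform over the nodes of a graph: the transition probability at each step is computed only from the degrees of the current node and a proposed neighbor, i.e., purely local information. The use of networkx, the Barabási-Albert test graph, the burn-in length, and the function name are illustrative assumptions, not part of the proposed work.

```python
import random
import networkx as nx

def metropolis_hastings_walk(G, start, num_steps, rng=random):
    """Random walk whose stationary distribution is uniform over nodes.

    At the current node u, a neighbor v is proposed uniformly at random
    and accepted with probability min(1, deg(u)/deg(v)); otherwise the
    walk stays at u. Only the degrees of u and v -- locally available
    information -- are needed to compute the transition probability.
    """
    samples = []
    u = start
    for _ in range(num_steps):
        v = rng.choice(list(G.neighbors(u)))          # local proposal
        if rng.random() < min(1.0, G.degree(u) / G.degree(v)):
            u = v                                     # accept the move
        samples.append(u)
    return samples

if __name__ == "__main__":
    # Hypothetical test graph; any connected graph would do.
    G = nx.barabasi_albert_graph(1000, 3, seed=42)
    walk = metropolis_hastings_walk(G, start=0, num_steps=20000)
    # Discard a burn-in prefix so the retained samples reflect the
    # stationary (uniform) distribution rather than the start node.
    samples = walk[5000:]
    print("distinct nodes visited:", len(set(samples)))
```

How many steps must be discarded before the samples are close to uniform is governed by the mixing time of the chain, which is exactly the quantity the proposed approaches aim to reduce.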
Broader Impact: MCMC-like approaches underlie algorithms for a wide range of socially and economically important problems, so progress in improving mixing time will have significant impact. Additionally, the proposal highlights student mentorship and interdisciplinary course development.