Michael A. Bender, Stony Brook; Martin Farach-Colton, Rutgers
In computer systems in which jobs contend for a common resource, two jobs are said to collide when they request the resource at the same time. Randomized backoff remains the method of choice for resolving such resource contention. The idea of backoff is that whenever a job collides, it retries after some random delay. Backoff is at the core of how the internet and many other complex systems work.
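As a concrete illustration of this retry-after-random-delay idea, the following is a minimal sketch of randomized binary exponential backoff; the names try_acquire and do_work are hypothetical placeholders for the contended resource and the job's work, not part of any system described here.

```python
import random
import time

def with_exponential_backoff(try_acquire, do_work, max_attempts=16, base_delay=0.001):
    """Retry an operation under randomized (binary exponential) backoff.

    try_acquire() -> bool : attempt to grab the contended resource (hypothetical).
    do_work()             : the job to run once the resource is held (hypothetical).
    """
    for attempt in range(max_attempts):
        if try_acquire():            # no collision: the job proceeds
            return do_work()
        # Collision: wait a delay drawn uniformly at random from a window
        # that doubles after every failed attempt.
        window = base_delay * (2 ** attempt)
        time.sleep(random.uniform(0, window))
    raise RuntimeError("resource still contended after max_attempts retries")
```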
Given the prominent role played by randomized backoff, it is surprising that many aspects of backoff are not understood. To date, most analyses have focused on statistical (stochastic) job arrivals on relatively simple channels. The assumption of statistical arrival rates dangerously neglects the worst case, however, because bursty and other pathological inputs are common in practice. Furthermore, many actual channels are not simple.
This research aims to develop a theory of the worst-case performance of backoff algorithms under various assumptions about the channel. The research will be guided by contention-resolution problems in shared-memory applications, such as transactional memory. Transactional-memory applications have multiple channels, support job sizes that differ by orders of magnitude, may provide rich feedback on collisions, and may allow one of the conflicting jobs to succeed.
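To make these features concrete, the sketch below shows one possible optimistic-transaction retry loop with randomized backoff; the VersionedCell class, the job_size scaling of the backoff window, and all other names are illustrative assumptions, not a description of any particular transactional-memory system studied in this research.

```python
import random
import threading
import time

class VersionedCell:
    """Toy shared cell with optimistic concurrency: a commit fails (a 'collision')
    if another job committed since this transaction read the cell."""
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self.version = 0
        self.value = value

    def read(self):
        with self._lock:
            return self.version, self.value

    def try_commit(self, seen_version, new_value):
        with self._lock:
            if self.version != seen_version:
                return False          # conflict feedback: another job won
            self.version += 1
            self.value = new_value
            return True               # one job succeeds despite the conflict

def transact(cell, update, job_size=1, max_attempts=32, base_delay=1e-4):
    """Run an optimistic transaction, backing off randomly after each conflict.
    Scaling the window by job_size models jobs of very different lengths
    (an assumed policy for illustration only)."""
    for attempt in range(max_attempts):
        version, value = cell.read()
        new_value = update(value)    # the job's work, possibly long-running
        if cell.try_commit(version, new_value):
            return new_value
        window = base_delay * job_size * (2 ** attempt)
        time.sleep(random.uniform(0, window))
    raise RuntimeError("transaction aborted after max_attempts conflicts")
```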
The research will provide a solid theoretical foundation and performance model for the many applications built on multiple-access channels. Without such performance models, implementations are likely to exhibit unpredictable performance and subtle bugs. For the target application of transactional programming in shared memory, a proper understanding of contention resolution is on the critical path to creating a viable system.