Election mechanisms are widely used in computational settings, including a rapidly expanding range of applications in multiagent systems. Twenty years ago, responding to results showing that all reasonable election systems can be manipulated, Bartholdi, Orlin, Tovey, and Trick proposed protecting election systems from manipulation by making the attacker's task computationally prohibitive, e.g., NP-hard. Their work started a rich line of research that has yielded many such computational protection results.
However, a number of weaknesses in this approach have emerged: (1) Much of the work assumes that voters have complete, transitive preferences and that the manipulator has perfect knowledge of every voter's preferences. (2) Many election systems admit polynomial-time algorithms for perfect manipulation and so cannot be computationally protected in this way. (3) Even when NP-hardness results exist, they are proven under the assumption that every collection of votes is possible, and it has recently been shown that for restricted electorates, for example voters obeying the common behavioral model known as single-peakedness, these NP-hardness results often evaporate. (4) NP-hardness results are, moreover, worst-case results, so heuristic attacks that succeed on most instances may still exist.
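To make weakness (2) concrete, the sketch below (illustrative only; the function names and the sample profile are hypothetical, not drawn from the proposal) implements the classic greedy algorithm of Bartholdi, Tovey, and Trick for a single manipulator under the Borda rule: a manipulative ballot is constructed, or shown not to exist, in polynomial time, so NP-hardness offers no shield here. "Winning" is taken in the co-winner sense, i.e., tying for the highest score counts as a win.

```python
def borda_scores(profile, candidates):
    """Borda score of each candidate: m-1 points for 1st place, m-2 for 2nd, ..."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for ballot in profile:
        for pos, c in enumerate(ballot):
            scores[c] += m - 1 - pos
    return scores

def greedy_borda_manipulation(profile, candidates, preferred):
    """Try to build one extra ballot that makes `preferred` a Borda co-winner.

    Greedy construction: put `preferred` on top, then fill the remaining
    positions from highest to lowest, at each step placing a candidate whose
    resulting score still does not exceed `preferred`'s score.  For scoring
    rules such as Borda, if the greedy step ever fails, no single manipulative
    ballot exists, so the whole procedure runs in polynomial time.
    """
    base = borda_scores(profile, candidates)
    m = len(candidates)
    ballot = [preferred]
    target = base[preferred] + (m - 1)        # preferred takes the top position
    remaining = [c for c in candidates if c != preferred]
    for pos in range(1, m):
        points = m - 1 - pos
        # candidates that can take this slot without overtaking `preferred`
        safe = [c for c in remaining if base[c] + points <= target]
        if not safe:
            return None                        # greedy failure: no manipulation exists
        choice = max(safe, key=lambda c: base[c])  # any safe candidate works; take the strongest
        ballot.append(choice)
        remaining.remove(choice)
    return ballot

if __name__ == "__main__":
    candidates = ["a", "b", "c", "d"]
    # hypothetical sincere ballots of the other voters
    profile = [["b", "a", "c", "d"],
               ["c", "b", "d", "a"],
               ["a", "d", "b", "c"]]
    # prints ['a', 'b', 'c', 'd']: candidate a ties for the win with this ballot
    print(greedy_borda_manipulation(profile, candidates, preferred="a"))
```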
This project will respond to these weaknesses, rebuilding the computational approach to protecting elections and more rigorously delineating its limitations. More natural and realistic models of voters and manipulators will be developed, with direct consequences for each of the weaknesses discussed above. Both complexity-theoretic and algorithmic techniques will be advanced in a broad investigation of these weaknesses and of ways to move beyond them, so as to retain the value of using complexity theory to protect election systems against manipulation.