This research seeks to establish a preferred interconnection network for massively parallel computers of the future. Such a network must have hardware requirements that grow only modestly with the number of processors, yet it must also support general patterns of communication efficiently. Hardware cost is equated with physical space under a model that accounts for minimum width and spacing rules for wires, in keeping with the common use of area as a cost measure for chips. Particular attention will be given to ensuring that general-purpose communication capabilities hold up even when interprocessor distances are large and when faults are introduced. The work will employ both mathematical proof and simulation of the proposed networks. Settling on a preferred general-purpose interconnection network would allow for the development of more portable and longer-lived parallel algorithms and programs.
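As a point of reference only, the minimal sketch below (in Python, with entirely hypothetical width and spacing values and toy connection patterns, not the proposal's actual model) illustrates how an area-based cost model of this kind charges for wiring: each wire claims its minimum width plus the mandatory spacing to its neighbor, so the area of a routing region grows with both the number of wires crossing it and their length.

```python
# Illustrative sketch (hypothetical parameters): estimating the physical
# area consumed by wiring when every wire must respect a minimum width
# and a minimum spacing to adjacent wires.

WIRE_WIDTH = 1.0    # assumed minimum wire width, in arbitrary layout units
WIRE_SPACING = 1.0  # assumed minimum spacing between adjacent wires

def track_pitch(width=WIRE_WIDTH, spacing=WIRE_SPACING):
    """Pitch of one routing track: a wire's own width plus the
    mandatory gap separating it from the next wire."""
    return width + spacing

def channel_area(num_wires, wire_length):
    """Area of a routing channel carrying num_wires parallel wires of a
    given length, each on its own track."""
    return num_wires * track_pitch() * wire_length

# Toy comparison for a row of n processors: a nearest-neighbor pattern puts
# a constant number of wires across any vertical cut, while a pattern that
# sends n/2 wires across the middle of the row needs far more channel area.
for n in (64, 256, 1024):
    local_only = channel_area(num_wires=1, wire_length=n)
    across_middle = channel_area(num_wires=n // 2, wire_length=n)
    print(f"n={n:5d}  nearest-neighbor row area ~ {local_only:8.0f}  "
          f"bisection-heavy row area ~ {across_middle:10.0f}")
```

The toy numbers are meant only to show the tension named above: richer communication patterns demand more wires crossing any region of the layout, and under minimum width and spacing rules that demand translates directly into physical space.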