Cache hierarchies, heretofore rarely used, will become common in future shared-memory computers because: (1) processors are speeding up relative to main memories, (2) on-chip caches are useful but not large enough to obviate the need for off-chip caches, and (3) specializing caches to reduce interconnection bandwidth is important to shared-memory multiprocessors. Secondary caches, those caches not adjacent to processors, need to be studied because they operate in a different system context than level-one caches (those adjacent to processors), yet few studies have examined them. Secondary caches, for example, need only perform block transfers and are not referenced on every cycle, as level-one caches are. These and other differences between the operational requirements of secondary and level-one caches should lead to significantly different implementations. Several aspects of secondary cache organization are examined, including: (1) new methods for implementing set-associativity, (2) how cache coherency invalidations affect optimal cache organization, and (3) when tag or data memory should be implemented with dynamic RAM. The methods used are analytic modeling, trace-driven simulation, and trial implementations.