Hybrid von Neumann-dataflow architectures attempt to combine dataflow hardware support for synchronization and latency hiding with von Neumann features such as register files, instruction and data caches, RISC-style instruction pipelining, and vector data structures. The success of a hybrid design lies in its ability to strike a balance between dataflow and von Neumann features that optimizes performance. This research provides both quantitative and qualitative analyses of these features to determine such a balance. First, a quantitative evaluation of the relative extent of the various forms of locality is performed. Second, the architectural features, pertaining to processor design and the storage model, that can exploit these forms of locality are identified, and their impact on performance, code-generation strategies, and instruction-set design is evaluated. For a given processor and machine architecture, code-partitioning and structure-allocation strategies are then determined and evaluated. The hybrid dataflow model is evaluated using a parameterizable, discrete-event-driven, register-transfer-level dataflow machine simulator.