Marker-propagation networks (MPNs) represent a powerful computational model especially suitable for artificial intelligence and other applications where information is often uncertain and incompletely specified. An MPN is a network of active nodes connected by labeled links. The power and novelty of this model derive from its processing instruments: programmable data patterns of flexible length, called markers. Node processing is activated by the arrival of markers. Marker propagation through the network is not driven by destination addresses; instead, it depends on local conditions at the nodes, on the network topology, and on the information programmed into each marker. This is a major difference between MPNs and other parallel processing models, and it is the reason why MPNs are capable of handling weakly or incompletely specified applications. Previous work in the area of marker-propagation processing assumed much simpler models. The model studied in this project operates asynchronously and is highly programmable. Critical issues in implementing MPNs on existing parallel computers include possible programming paradigms, sources of parallelism, load balancing, and design tradeoffs. The approach taken in this project is quantitative: the model is implemented on several parallel computers, performance is measured, and the results are compared. The MPN model is scalable and has the potential to become a preferred architecture for future supercomputers dedicated to knowledge-processing applications.
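
To make the propagation mechanism concrete, the following Python sketch illustrates the idea of routing by local link labels and marker-programmed rules rather than by destination addresses. All names, data structures, and rules here (Marker, Node, propagate, the hop limit, and the sample semantic-network fragment) are illustrative assumptions for exposition, not the model or implementation studied in this project.

```python
# Illustrative sketch only: a toy marker-propagation step.
# Names and rules are assumptions, not the project's actual model.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Marker:
    """A programmable data pattern of flexible length."""
    payload: dict                               # information programmed into the marker
    follow: set = field(default_factory=set)    # link labels this marker may follow
    hops_left: int = 3                          # simple, assumed termination condition

@dataclass
class Node:
    name: str
    links: dict = field(default_factory=dict)   # link label -> list of neighbor names

def propagate(nodes: dict, start: str, marker: Marker) -> None:
    """Spread a marker from `start`. Routing depends only on local link labels,
    the network topology, and the rules carried by the marker itself."""
    visited = set()
    queue = deque([(start, marker.hops_left)])
    while queue:
        name, hops = queue.popleft()
        if name in visited or hops < 0:
            continue
        visited.add(name)
        node = nodes[name]
        # Node processing is activated by the arrival of a marker.
        print(f"{node.name} received marker {marker.payload}")
        # No destination address: the marker is forwarded along every local
        # link whose label it has been programmed to follow.
        for label, neighbors in node.links.items():
            if label in marker.follow:
                for nbr in neighbors:
                    queue.append((nbr, hops - 1))

# Tiny example network: a semantic-network fragment with labeled links.
nodes = {
    "canary": Node("canary", {"isa": ["bird"]}),
    "bird":   Node("bird",   {"isa": ["animal"], "can": ["fly"]}),
    "animal": Node("animal", {}),
    "fly":    Node("fly",    {}),
}
propagate(nodes, "canary",
          Marker(payload={"query": "properties"}, follow={"isa", "can"}))
```

In this toy run the marker placed on "canary" reaches "bird", "animal", and "fly" purely by following links labeled "isa" and "can", which is the sense in which propagation is governed by local conditions and marker content rather than by a preassigned destination.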