This research investigates the protection of computation integrity and the enhancement of data privacy for applications executed on distributed computing platforms that utilize the spare processor cycles of computers connected to the Internet. The goal of the project is to provide computation supervisors with quantifiable protection and privacy mechanisms tailored to application- and platform-specific constraints. The project considers existing platforms, which prohibit participating computers ("participants") from communicating with other participants, as well as potential future hierarchical platforms that permit communication among participants and allow dynamic variation of the platform topology. The approach to computation integrity combines probabilistic verification with a novel application of artificial intelligence techniques for detecting anomalous behavior. The approach to data privacy is based on obscuring data values while preserving enough output information to identify specific important data within a vast data set.

The experimental research is linked to education at the University of Richmond through the development of special topics courses and the enhancement of existing courses, and by engaging undergraduates in relevant research via funding of research fellowships, equipment, and travel to project-related conferences and workshops. The results of the project will provide access to more secure computing platforms, fostering interdisciplinary research among scientists and practitioners. Specific fields that stand to benefit from this research include biology, chemistry, mathematics, finance, medicine, and evolutionary theory. Project implementations will modify existing open-source distributed computing code and will be disseminated via the project Web site (http://dvc.richmond.edu/).
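As one illustration of the probabilistic verification approach mentioned above, the following minimal Python sketch shows a supervisor recomputing a randomly sampled fraction of returned task results and flagging participants whose answers disagree. The names (supervise, check_rate) and the spot-checking scheme are assumptions introduced here for illustration, not the project's actual implementation.

import random

def supervise(tasks, reported_results, recompute, check_rate=0.1, rng=None):
    """Spot-check a random fraction of returned results by recomputing them locally.

    tasks            -- task_id -> task input
    reported_results -- task_id -> (participant_id, result) returned by participants
    recompute        -- trusted local function that reproduces a task's result
    check_rate       -- fraction of tasks to verify (a hypothetical tuning parameter)
    """
    rng = rng or random.Random()
    suspects = set()
    sample_size = max(1, int(len(tasks) * check_rate))
    for task_id in rng.sample(list(tasks), sample_size):
        participant_id, result = reported_results[task_id]
        if recompute(tasks[task_id]) != result:
            suspects.add(participant_id)  # anomalous result detected
    return suspects

# Toy usage: tasks square integers; one participant occasionally returns bad answers.
tasks = {i: i for i in range(100)}
reported = {i: ("p1" if i % 2 else "p2", i * i if i % 7 else i * i + 1)
            for i in range(100)}
print(supervise(tasks, reported, recompute=lambda x: x * x, check_rate=0.3))

In such a scheme, the check rate (and any additional replication of tasks across participants) would presumably be chosen to meet the quantifiable protection guarantees the project targets.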