Ever-increasing computational power has spurred the rapid development of techniques for simulating large stochastic interacting particle systems. Growing computing capabilities have yielded unprecedented insight into problems ranging from physical phenomena in materials, chemical reactions, and biological processes to image processing. However, as is common in the early development of simulation methodologies, the rapid emergence of new computational techniques has far outstripped theoretical understanding of the algorithms. From the computational point of view, balancing the competing aims of numerical accuracy and computational efficiency remains one of the central problems in simulations of large multi-scale systems.

The proposed research outlines a framework for the numerical analysis and implementation of simulation algorithms based on coarse-graining of the microscopic system. We view coarse-graining as a numerical approximation of coarse observables that would otherwise have to be estimated from computationally expensive microscopic simulations. The goal of the proposed work is to develop numerical tools for assessing the quality of this approximation and to use the resulting error indicators to implement reliable simulation algorithms. Understanding approximations and their limitations is particularly important in the context of stochastic simulations, since even a convergent simulation may fail to provide reliable insight into the simulated phenomena (e.g., phase transitions and critical behavior). Furthermore, the computational cost can be decreased substantially if the trade-off between accuracy and efficiency is adjusted properly.

In simulations of systems with a large number of interacting entities, one typically ends up estimating averages of observables that depend on randomly distributed microscopic configurations of the system (see the illustrative sketch at the end of this section). In the proposed approach we apply hierarchical microscopic-macroscopic computational paradigms and explore their potential for improving the efficiency, reliability, and algorithmic complexity of methods for sampling probability measures on high-dimensional spaces. We also aim to develop tools suitable for approximating transient behavior, which may not be properly captured, on experimentally relevant time scales, by sampling the equilibrium distribution. The proposed framework will derive nested approximations of the underlying multi-scale system with optimal interactions between the different scales. This scale-decomposition strategy will be applied to the development of parallel algorithms for simulating stochastic systems with a large number of degrees of freedom.

The proposed work lies at the intersection of numerical analysis, stochastic processes, and statistical mechanics. Its potential impact encompasses a wide range of applications in physics, chemistry, materials science, and other fields where large multi-scale simulations are used. Therefore, one of the objectives is to implement flexible tools that can be tailored to specific problems and existing packages with a short learning curve.
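To make the sampling problem concrete, the following minimal sketch (not part of the proposed framework; the model, block size, and all parameter values are illustrative assumptions) uses the Metropolis algorithm on a small one-dimensional Ising-type lattice to estimate a coarse observable, the block-averaged magnetization, along with a naive Monte Carlo error indicator.

```python
# Illustrative sketch only: Metropolis sampling of a 1D Ising-type lattice
# and estimation of a coarse observable (block-averaged magnetization).
import numpy as np

rng = np.random.default_rng(0)

N = 64        # number of microscopic spins (hypothetical size)
beta = 0.8    # inverse temperature (illustrative value)
q = 8         # coarse-graining block size: N // q coarse cells

def local_energy_change(spins, i):
    """Energy change from flipping spin i (periodic nearest-neighbor Ising, J = 1)."""
    left, right = spins[(i - 1) % N], spins[(i + 1) % N]
    return 2.0 * spins[i] * (left + right)

def metropolis_sweep(spins):
    """One Metropolis sweep: N random single-spin flip proposals."""
    for i in rng.integers(0, N, size=N):
        dE = local_energy_change(spins, i)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]

def coarse_observable(spins):
    """Block-averaged magnetization: one simple choice of coarse description."""
    return spins.reshape(N // q, q).mean(axis=1)

spins = rng.choice([-1, 1], size=N)

# Burn-in toward the equilibrium distribution.
for _ in range(500):
    metropolis_sweep(spins)

# Accumulate samples of the coarse observable.
samples = []
for _ in range(2000):
    metropolis_sweep(spins)
    samples.append(coarse_observable(spins))
samples = np.array(samples)

mean = samples.mean(axis=0)
# Naive Monte Carlo error indicator based on the sample variance.
stderr = samples.std(axis=0, ddof=1) / np.sqrt(len(samples))
print("coarse magnetization per block:", np.round(mean, 3))
print("naive standard errors:        ", np.round(stderr, 3))
```

The naive standard error above ignores autocorrelation between successive sweeps; a production code would account for it (e.g., via batch means or blocking), which connects to the error indicators discussed above.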