The LoBoS high-performance computing cluster continues to evolve as a scientific computing resource for the Laboratory of Computational Biology. Improvements to the cluster are largely driven by continued improvements in the price-performance ratio of commodity off-the-shelf workstation, server, and networking hardware. In FY 2011, 96 new nodes, based on Intel's Westmere microarchitecture, were added to the cluster and are currently in testing. Four of these nodes have graphics processing units (GPUs) suitable for general-purpose computing. CHARMM has been ported to the GPU architecture, and deploying these nodes will enable future research and production work in the lab on running molecular simulation codes on GPUs.

Various improvements to the CHARMM molecular simulation package have been made in the lab. The conversion of the code to Fortran 95 is largely complete, and plans for continued structural and performance improvements are being made in collaboration with other CHARMM development sites. The MSCALE module for generalized multiscale computing has been enhanced to work with other simulation packages such as AMBER, TINKER, and SSDQO. Other current work in the Laboratory of Computational Biology involves enhancements to replica exchange and a new module for constant-pH simulations.

The CHARMMing graphical user interface to CHARMM is also under continuous development. Current work focuses primarily on improving the internal structure of the code and making it easier for other members of the CHARMM community to contribute improvements. Other work involves developing a generalized Python library for interfacing with CHARMM, along with making the module that performs oxidation/reduction calculations on Fe-S clusters more flexible and robust. Improvements to visualization and the user interface are also planned.
Multiscale modeling has become increasingly important for modeling complex biochemical processes. Even with powerful computer hardware and software, many biological processes occur on time scales too long to be modeled at highly accurate levels of theory. Multiscale modeling allows the important components of a system to be studied with highly accurate techniques while the remaining parts are modeled with less accurate but computationally cheaper methods. The MSCALE module in CHARMM was developed to enable concurrent multiscale simulations using CHARMM; it defines a general communication protocol that has also been implemented in the AMBER, TINKER, and SSDQO codes.

A graphics processing unit (GPU) is a specialized microprocessor that accelerates graphics rendering. The development of application programming interfaces supporting general-purpose computation on GPUs opened a new era for the acceleration of scientific applications. GPUs are much cheaper and more accessible than many high-performance platforms; however, they require significant optimization effort. By contrast, eXplicit Multi-Threading (XMT), a prototype general-purpose parallel architecture, promises high performance and scalability with ease of programming. Our project is implementing and improving the performance of Particle Mesh Ewald (PME) on both the GPU and XMT, as well as the Lennard-Jones and MAP-object methods. A performance comparison of different algorithm types on both processors is a natural outcome of the project. Our FFT implementations reveal significant performance gaps between XMT and the GPU for irregular memory access patterns. The long-term goal is to move the CHARMM molecular simulation program onto the GPU.
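The core idea behind multiscale energy evaluation can be illustrated with a toy subtractive scheme: the cheap model is applied to the whole system, and the expensive model corrects only the important subsystem. This is a minimal sketch in the spirit of such schemes, not MSCALE's actual protocol or API; the function names and toy energy expressions are illustrative assumptions.

```python
def cheap_energy(coords):
    # Stand-in for a low-level model applied to the full system
    # (e.g., a classical force field); here a toy harmonic term.
    return sum(0.5 * (x * x + y * y + z * z) for x, y, z in coords)

def accurate_energy(coords):
    # Stand-in for a high-level model applied only to the subsystem
    # (e.g., a QM method); here a toy quartic term.
    return sum(0.25 * (x ** 4 + y ** 4 + z ** 4) for x, y, z in coords)

def multiscale_energy(coords, subsystem):
    """Subtractive combination:
    E = E_low(full) + E_high(sub) - E_low(sub)
    so the cheap description of the subsystem is replaced by the
    accurate one, while the rest of the system stays cheap.
    """
    sub = [coords[i] for i in subsystem]
    return cheap_energy(coords) + accurate_energy(sub) - cheap_energy(sub)

full = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 2.0)]
print(multiscale_energy(full, subsystem=[2]))  # → 5.0
```

In a concurrent framework such as MSCALE, the two energy evaluations would be performed by separate programs running in parallel, with a server process combining the resulting energies and forces each step.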

National Heart, Lung, and Blood Institute
Eastman, Peter; Swails, Jason; Chodera, John D et al. (2017) OpenMM 7: Rapid development of high performance algorithms for molecular dynamics. PLoS Comput Biol 13:e1005659
Simón-Carballido, Luis; Bao, Junwei Lucas; Alves, Tiago Vinicius et al. (2017) Anharmonicity of Coupled Torsions: The Extended Two-Dimensional Torsion Method and Its Use To Assess More Approximate Methods. J Chem Theory Comput 13:3478-3492
Parrish, Robert M; Burns, Lori A; Smith, Daniel G A et al. (2017) Psi4 1.1: An Open-Source Electronic Structure Program Emphasizing Automation, Advanced Libraries, and Interoperability. J Chem Theory Comput 13:3185-3197
Meana-Pañeda, Rubén; Xu, Xuefei; Ma, He et al. (2017) Computational Kinetics by Variational Transition-State Theory with Semiclassical Multidimensional Tunneling: Direct Dynamics Rate Constants for the Abstraction of H from CH3OH by Triplet Oxygen Atoms. J Phys Chem A 121:1693-1707
Tan, Ming-Liang; Tran, Kelly N; Pickard 4th, Frank C et al. (2016) Molecular Multipole Potential Energy Functions for Water. J Phys Chem B 120:1833-42
Konc, Janez; Miller, Benjamin T; Štular, Tanja et al. (2015) ProBiS-CHARMMing: Web Interface for Prediction and Optimization of Ligands in Protein Binding Sites. J Chem Inf Model 55:2308-14
Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T et al. (2015) Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface. J Comput Chem 36:62-7
Pickard 4th, Frank C; Miller, Benjamin T; Schalk, Vinushka et al. (2014) Web-based computational chemistry education with CHARMMing II: Coarse-grained protein folding. PLoS Comput Biol 10:e1003738
Miller, Benjamin T; Singh, Rishi P; Schalk, Vinushka et al. (2014) Web-based computational chemistry education with CHARMMing I: Lessons and tutorial. PLoS Comput Biol 10:e1003719
Perrin Jr, B Scott; Miller, Benjamin T; Schalk, Vinushka et al. (2014) Web-based computational chemistry education with CHARMMing III: Reduction potentials of electron transfer proteins. PLoS Comput Biol 10:e1003739

Showing the most recent 10 out of 15 publications