Migration of CHARMM to GPUs

Graphical processing units (GPUs) are modern processors that support the concurrent execution of thousands of threads. In this work we have redesigned the CHARMM codebase from a heterogeneous CPU-GPU architecture to a GPU-only architecture. This design avoids communicating forces and coordinates between host and device memory at every step of the simulation. Because coordinates are available only in device memory, most features are being reimplemented for the GPU, and several features of CHARMM have been implemented or optimized to use the underlying processor threads efficiently. Another important feature of the new implementation is its focus on modularity, which supports easy extension and adherence to current software development best practices.

Psi4

Our quantum mechanical development efforts in the Psi4 package have improved its QM/MM capabilities, providing an open-source solution for use with the CHARMM code. Through vastly improved integral screening and parallelization, we have removed the major performance bottlenecks previously present when electrostatically embedding QM regions inside large MM domains. We have also implemented analytic DFT Hessians, enabling many types of analysis. Ongoing developments include a multi-precision GPU algorithm for computing circular dichroism and constrained density-fitting approaches that enforce charge neutrality, which is crucial for periodic systems.

P21 periodic boundary condition in CHARMM

The eighth-shell method has previously been shown to be optimal for parallelizing molecular dynamics simulations over large numbers of nodes. However, this method supports only the P1 periodic boundary condition (PBC) and cannot handle reflection or rotational symmetry. In this work we developed the Extended Eighth Shell (EES) method, which simulates only the asymmetric unit and communicates coordinates and forces with the images that correspond to the P21 PBC. The P21 boundary condition is useful in lipid bilayer simulations because it allows lipids to move from one leaflet to the other, balancing the chemical potential difference between the two leaflets.

Development of the Action-CSA Method in CHARMM

Finding a reaction pathway that connects two well-defined end states is a challenging and important problem in molecular simulation. Even though molecular dynamics simulations on special-purpose hardware such as GPUs can now routinely reach microsecond time scales, brute-force searches for pathways remain inefficient. Recently, our lab developed the Action-CSA search method, in which multiple pathways connecting two end states are found via global optimization of the Onsager-Machlup action by the conformational space annealing (CSA) algorithm. Although this method was successfully used to find pathways in a variety of systems, the initial implementation in CHARMM was unsustainable and not easily distributed to the wider MD community. The Action-CSA method is now being rewritten in the latest version of CHARMM with the aim of making it applicable to a wider variety of problems, robust, and simple to use. We aim to integrate this code into the next major release of CHARMM.
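To make the optimization target concrete, here is a minimal one-dimensional sketch of a discretized Onsager-Machlup action of the kind Action-CSA minimizes. The double-well potential, reduced units, and dropped prefactors are illustrative assumptions rather than the CHARMM implementation, whose discretization conventions may differ.

```python
import numpy as np

def grad_U(x):
    """Gradient of an illustrative 1D double-well potential U(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x**2 - 1.0)

def om_action(path, dt):
    """Discretized Onsager-Machlup action for an overdamped Langevin process,
    in reduced units with constant prefactors dropped:
        S = sum_i ((x_{i+1} - x_i)/dt + grad_U(x_i))^2 * dt / 4
    Lower action corresponds to a more probable path between the fixed end points.
    """
    v = np.diff(path) / dt          # finite-difference velocity along the path
    drift = grad_U(path[:-1])       # deterministic drift at the left node of each segment
    return np.sum((v + drift) ** 2) * dt / 4.0

# Linear initial guess connecting the two minima of the double well.
path = np.linspace(-1.0, 1.0, 101)
print(om_action(path, dt=0.01))
```

In the actual method, many such trial paths are generated and the action is minimized by conformational space annealing rather than evaluated for a single linear guess.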
Development of CPPTRAJ Analysis Software

CPPTRAJ is a molecular dynamics (MD) trajectory analysis program that is widely used by the MD community. It can process data from a variety of MD software packages, including Amber, CHARMM, NAMD, and Gromacs, and it is under continual development to improve its utility. Recent improvements include 1) the ability to treat long-range Lennard-Jones interactions via the particle mesh Ewald method; 2) a robust method for calculating lipid order parameters from CHARMM or Amber simulations; 3) the ability to process more simulation data from CHARMM, including additional coordinate formats, replica exchange data, and energies; and 4) the ability to process constant-pH simulation data from Amber simulations.

Implementation of improved CHARMM force field support into OpenMM

The OpenMM molecular simulation code is optimized for GPU-accelerated molecular dynamics simulations. It provides great flexibility through user-defined potential energy functions and integrators, which allows rapid exploration of novel simulation approaches on large molecular systems. While CHARMM force fields and input files are in principle supported by OpenMM, some important features, such as free energy calculations, were missing. We augmented the parser for CHARMM PSF files to better match CHARMM's standard behavior, implemented CHARMM's van der Waals switching functions (vswitch and vfswitch), and added support for alchemical free energy simulations using CHARMM input files to openmmtools.
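As a usage illustration, the sketch below sets up a simulation from CHARMM PSF, CRD, and parameter files using OpenMM's public CHARMM readers. The file names, box size, and run length are placeholder assumptions, and the vswitch/vfswitch and alchemical machinery described above lives in openmmtools and is not shown here.

```python
# Minimal sketch: drive a CHARMM-parameterized system with OpenMM.
from openmm.app import (CharmmPsfFile, CharmmCrdFile, CharmmParameterSet,
                        Simulation, PME, HBonds)
from openmm import LangevinMiddleIntegrator
from openmm.unit import kelvin, picosecond, femtoseconds, nanometer

psf = CharmmPsfFile('system.psf')                    # CHARMM topology (placeholder name)
crd = CharmmCrdFile('system.crd')                    # CHARMM coordinates
params = CharmmParameterSet('top_all36_prot.rtf',    # CHARMM36 toppar files
                            'par_all36_prot.prm')

psf.setBox(6.0*nanometer, 6.0*nanometer, 6.0*nanometer)  # periodic box, required for PME
system = psf.createSystem(params, nonbondedMethod=PME,
                          nonbondedCutoff=1.2*nanometer, constraints=HBonds)

integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 2*femtoseconds)
sim = Simulation(psf.topology, system, integrator)
sim.context.setPositions(crd.positions)
sim.minimizeEnergy()
sim.step(1000)
```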
A computational framework for calculating position-dependent diffusion and free energy profiles through membranes

Calculating membrane permeabilities with the inhomogeneous solubility-diffusion (ISD) model requires accurate position-dependent free energy and diffusion profiles of permeants through membranes. In this model the permeability P is obtained from the profiles as 1/P = ∫ exp(F(z)/kBT) / D(z) dz, so errors in either profile propagate directly into the predicted permeability. We developed a software framework that calculates these profiles from biased and unbiased simulations by applying a maximum-likelihood approach to the ISD equation.

In recent years, this lab has developed a series of new computational methods, including self-guided Langevin dynamics (SGLD) for efficient conformational searching and sampling, the isotropic periodic sum (IPS) method for accurate and efficient calculation of long-range interactions, and the map-based modeling tool EMAP for electron microscopy studies. Implementation of these methods enables researchers to tackle difficult problems. They have now also been implemented in another widely used simulation package, AMBER, to extend their user base; the SGLD, IPS, and EMAP methods are available in AMBER version 16.

LOBOS

In 2019, we continued to upgrade the compute capabilities of LoBoS by increasing the number of existing nodes available to all users and by adding new nodes. We increased the pool of nodes available to all users by moving from two separate queuing systems (PBS/SLURM) to a single SLURM queue, which has increased overall node utilization and provides more flexible scheduling for all users. In addition, we purchased 25 new GPU nodes, each containing two NVIDIA Tesla V100 GPUs. The performance of GPU-capable software (e.g., Amber and OpenMM) on these nodes is approximately double that of our previous-generation Titan Xp GPU nodes. To take advantage of this compute power, we are continuing to modify our toolchain to run well on GPUs; as part of this effort we added a flexible Nosé-Hoover integrator to the OpenMM package, which permits the use of conventional and Drude-polarizable force fields on the V100 cards. We increased our archive storage capacity by 750 TB, which enables us to meet data retention regulations while keeping the data accessible to lab staff for derived analyses. We also added network security controls and modified our network architecture to satisfy government security regulations while allowing continued direct collaboration with outside groups. In support of code development, we configured one of our analysis nodes to enable code profiling using the Intel Cluster suite, which we acquired this year. A CUDA-capable continuous integration server was set up in conjunction with our in-house GitLab server, allowing regular testing of our software development efforts, including new GPU development.
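A minimal sketch of the Nosé-Hoover integrator mentioned above, as exposed through OpenMM's Python API; the temperature, coupling frequency, and step size are arbitrary example values.

```python
# Thermostat the whole system at 300 K with a Nose-Hoover integrator.
from openmm import NoseHooverIntegrator
from openmm.unit import kelvin, picosecond, femtoseconds

# The class also supports thermostat chains and separate subsystem
# thermostats, which is what allows Drude-polarizable force fields to
# keep the Drude degrees of freedom at a low relative temperature.
integrator = NoseHooverIntegrator(300*kelvin, 1/picosecond, 1*femtoseconds)
```

This integrator can be dropped into a simulation setup such as the one shown earlier in place of the Langevin integrator.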

Support Year: 22
Fiscal Year: 2019
Name: National Heart, Lung, and Blood Institute