In conjunction with our development of new methods, we are actively engaged in developing, or contributing to the development of, several simulation and modeling software packages. We make our development efforts freely available in order to maximize the impact of our work.

In 2018 we continued our technology tracking efforts for the LoBoS HPC cluster. A system based on AMD's EPYC processor (purchased in 2017) was used to benchmark CHARMM and Amber; however, its performance was found to be generally inferior to that of Intel processors. We also benchmarked an Intel Knights Landing system and found its performance for Amber and CHARMM to be inferior to both the AMD and the other Intel processors; Intel has since discontinued the Knights Landing platform. Based on these results, we chose to continue purchasing Intel CPUs for our latest cluster refresh. LoBoS acquired 36 compute nodes based on Intel's Skylake processors, as well as an additional Skylake-based analysis node. This will enable continued optimization of CHARMM, Amber, and other codes using Intel's AVX-512 instruction set. This year also saw the completion of a multi-year effort to relocate the physical infrastructure of LoBoS to a new computer room on the main NIH campus in Building 12. This facility can support over 150% of the capacity of the previously leased facility. In addition, the new facility has a much more reliable power infrastructure, which has greatly increased the uptime of the LoBoS system, avoiding disruptions to the simulations running on the cluster and therefore to the research carried out by our lab. The proximity of the new facility to the existing LoBoS room on the second floor of Building 12 (which hosts the main LoBoS networking infrastructure and data storage) allowed us to quadruple the network bandwidth between the rooms. The latest versions of CHARMM and Amber (version 18) have been installed, benchmarked, and tested on LoBoS. In terms of user support, we made the LoBoS wiki publicly available, which helps researchers both inside and outside the NIH. We also made a new ticketing system accessible worldwide, which helps us address the problems users experience in a timely manner.

Interfacing CHARMM and Psi4
Our long-standing collaboration with Alex MacKerell's group has recently seen the introduction of an interface between the CHARMM molecular dynamics package and the Psi4 quantum chemistry code, both of which are developed in our group. We are building on this initial implementation by adding efficient screening techniques to Psi4 to accelerate the evaluation of integrals, and by extending the interface to polarizable force fields through the implementation of a Drude solver in Psi4 (a small illustrative sketch of driving Psi4 from a script appears below).

Implementation of MPID for graphics cards
After showing that the existing CHARMM polarizable force field can be cast into a framework of multipoles and induced dipoles (MPID), we recently implemented MPID so that it can run on graphics cards, greatly speeding up simulations with this force field. Although this is a great step forward, we are continuing work to obtain further gains by treating atoms with and without multipoles separately. The resulting code is freely available as an OpenMM plugin and is being used in our group to accelerate our efforts to seek alternative parameterizations of the CHARMM Drude force field.
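As context for how an OpenMM plugin such as this is used, the following is a minimal sketch of an OpenMM (version 7) Python script that builds a system from CHARMM input files; the file names are placeholders, and the mpidplugin module name and MPIDForce class shown in the comments are assumptions for illustration rather than the plugin's confirmed API.

    from simtk import openmm, unit
    from simtk.openmm import app

    # Load a system prepared from CHARMM files (file names are placeholders).
    psf = app.CharmmPsfFile('system.psf')
    crd = app.CharmmCrdFile('system.crd')
    params = app.CharmmParameterSet('toppar.str')
    psf.setBox(6.0*unit.nanometers, 6.0*unit.nanometers, 6.0*unit.nanometers)
    system = psf.createSystem(params, nonbondedMethod=app.PME,
                              nonbondedCutoff=1.0*unit.nanometer)

    # The MPID plugin would add its multipole/induced-dipole force here, e.g.:
    #   import mpidplugin                         # assumed module name
    #   system.addForce(mpidplugin.MPIDForce())   # assumed class; see plugin docs

    integrator = openmm.LangevinIntegrator(300*unit.kelvin, 1.0/unit.picosecond,
                                           2.0*unit.femtoseconds)
    simulation = app.Simulation(psf.topology, system, integrator)
    simulation.context.setPositions(crd.positions)
    simulation.minimizeEnergy()
    simulation.step(1000)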
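Returning to the CHARMM-Psi4 interface described above, the following is a minimal sketch, with a hypothetical water geometry, of the kind of single-point energy and gradient evaluation that an MD/QM interface drives through Psi4's Python API; it is illustrative only and is not the interface code itself.

    import psi4

    psi4.set_memory("2 GB")
    psi4.core.set_output_file("psi4_region.out", False)

    # Hypothetical QM region: a single water molecule.
    mol = psi4.geometry("""
    0 1
    O   0.000   0.000   0.000
    H   0.757   0.586   0.000
    H  -0.757   0.586   0.000
    """)

    psi4.set_options({"basis": "cc-pvdz", "scf_type": "df"})

    # Energy and nuclear gradient; an MD interface converts the gradient to forces.
    energy = psi4.energy("scf", molecule=mol)
    gradient = psi4.gradient("scf", molecule=mol)  # psi4.core.Matrix, natom x 3
    print(energy, gradient.np)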
Improved Enveloping Distribution Sampling Functionality in CHARMM
Enveloping distribution sampling (EDS) is a method that allows the estimation of free energy differences between two or more states via simulation of a reference state that encompasses all states of interest. This method has been applied to a wide variety of problems, including by this lab in conjunction with Hamiltonian replica exchange molecular dynamics (HREMD) to facilitate the calculation of free energies at constant pH. However, the code as previously written did not make efficient use of large processor core counts, which limited both the size of the systems that could be simulated and the length of those simulations. The EDS functionality in CHARMM has now been rewritten to use the underlying parallel framework (MPI) more effectively, resulting in a speedup over the old code that grows with the number of processors employed: for example, with 2 processors the new code is 1.3x faster than the old code, and with 12 processors it is 6.6x faster. In addition, the new code has been written so that it integrates seamlessly with the new replica exchange molecular dynamics framework in CHARMM (REPD). This code is currently undergoing rigorous beta testing and will be submitted for the next major release of CHARMM. (An illustrative sketch of the EDS reference-state energy appears below.)

Development of Enhanced Analysis Capabilities for CPPTRAJ Analysis Software
CPPTRAJ is a molecular dynamics (MD) trajectory analysis program widely used in the MD community, with support for many MD software packages including Amber, CHARMM, NAMD, and Gromacs. CPPTRAJ is under continual development to improve its utility for the MD community. Recently, the ability to treat long-range electrostatics via the particle mesh Ewald method was added, allowing CPPTRAJ to calculate full-system energies. In conjunction with the existing parallel trajectory processing framework, this allows the rapid calculation of energies for large numbers of structures, which is important, for example, for conducting decoy analysis when evaluating new force field parameters. In addition, the existing parallel framework is being extended so that large numbers of analyses can be performed simultaneously on enormous data sets. This allows, for example, the rapid calculation of many 2D RMSD plots, which can then be used to determine Hausdorff distances, which in turn quantify how similar two separate MD trajectories are (see the illustrative sketch below). The extension of CPPTRAJ's parallel framework to analyses allows it to better utilize large-scale HPC resources such as LoBoS and Biowulf.

Development of the ForceSolve code
Through the various applications of the force matching software ForceSolve, the intramolecular force matching interface has evolved dramatically. Changes include constraining the fitted parameters to be positive, the inclusion of improper dihedrals in the matching, the ability to generate parameters based on connectivity (e.g., including all possible improper dihedrals and Urey-Bradley terms), and the ability to perform matching based only on selected parameters (e.g., terms already present in the original force field, with additional terms included or excluded via a selection interface).
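As an illustration of the positivity constraint just mentioned, the following is a minimal sketch, not ForceSolve itself, of force matching posed as a non-negative linear least-squares problem: each column of the design matrix holds the forces produced by one candidate term with a unit force constant, and the fitted coefficients are constrained to remain non-negative. All data here are placeholders.

    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical sizes: n_obs stacked reference force components (e.g. from QM),
    # n_terms candidate force-field terms (bonds, angles, impropers, ...).
    rng = np.random.default_rng(0)
    n_obs, n_terms = 300, 8

    # A[:, j] = forces generated by term j with a unit force constant (placeholder data).
    A = rng.normal(size=(n_obs, n_terms))
    k_true = np.abs(rng.normal(size=n_terms))             # true (positive) force constants
    f_ref = A @ k_true + 0.01 * rng.normal(size=n_obs)    # noisy reference forces

    # Non-negative least squares keeps every fitted force constant >= 0.
    k_fit, residual = nnls(A, f_ref)
    print(k_fit, residual)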
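For the 2D RMSD and Hausdorff analysis described in the CPPTRAJ section above, a minimal sketch of the final step: given a cross 2D RMSD matrix between the frames of two trajectories (for example, written out by a CPPTRAJ 2D RMSD calculation; the file name here is a placeholder), the symmetric Hausdorff distance can be computed directly from that matrix.

    import numpy as np

    # rmsd2d[i, j] = RMSD between frame i of trajectory A and frame j of trajectory B,
    # e.g. loaded from a matrix produced by a 2D RMSD analysis (placeholder file name).
    rmsd2d = np.loadtxt("crossrms.dat")

    # Directed Hausdorff distances: worst-case best match in each direction.
    h_ab = rmsd2d.min(axis=1).max()   # farthest A frame from its nearest B frame
    h_ba = rmsd2d.min(axis=0).max()   # farthest B frame from its nearest A frame

    # The symmetric Hausdorff distance is the larger of the two directed distances;
    # smaller values indicate more similar trajectories.
    print(max(h_ab, h_ba))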
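For the EDS functionality described earlier, the following is a minimal sketch of the quantity at the heart of the method: the reference-state potential that envelops the end-state potentials, written here with a single smoothness parameter s and per-state energy offsets. The numerical values are placeholders, and this is an illustration of the general EDS expression rather than the CHARMM implementation.

    import numpy as np
    from scipy.special import logsumexp

    def eds_reference_energy(energies, offsets, beta, s=1.0):
        """Reference-state potential enveloping all end states.

        energies : per-state potential energies V_i(r) for the current configuration
        offsets  : per-state energy offsets E_i^R
        beta     : 1 / (kB * T)
        s        : smoothness parameter controlling how softly the states are merged
        """
        x = -beta * s * (np.asarray(energies) - np.asarray(offsets))
        return -logsumexp(x) / (beta * s)

    # Placeholder numbers: two end states at 300 K, energies in kcal/mol.
    beta = 1.0 / (0.001987 * 300.0)
    print(eds_reference_energy([-1052.3, -1047.8], [0.0, -4.0], beta, s=0.3))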
P21 periodic boundary condition in CHARMM DOMDEC
The eighth shell method has previously been shown to be optimal for parallelization over large numbers of nodes. However, this method supports only the P1 periodic boundary condition and cannot handle rotational symmetry. We therefore developed the Extended Eighth Shell method to handle the P21 periodic boundary condition (PBC). This method simulates only the asymmetric unit and communicates coordinates and forces with the images that correspond to the P21 PBC. For the long-range interactions, an image of the asymmetric unit is appended along the axis of symmetry, and the reciprocal space calculations for the unit cell can then be done using the regular colfft module of CHARMM. The P21 PBC has applications in lipid bilayer simulations, as it allows lipids to move from one leaflet to the other, thus balancing the chemical potential difference between the two leaflets (a sketch of the P21 image operation appears below).

In addition, the EMAP facility in CHARMM has been ported to Amber (version 16) to assist in determining cryo-EM structures.
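To illustrate the P21 image operation referred to above, the following is a minimal sketch of a two-fold screw operation applied to a coordinate array: a 180-degree rotation about one box axis combined with a half-box translation along that axis. The choice of axis and the sign convention here are assumptions for illustration and need not match the convention used in the CHARMM implementation.

    import numpy as np

    def p21_image(coords, box):
        """Apply a two-fold (2_1) screw operation to generate the P21 image.

        coords : (n_atoms, 3) Cartesian coordinates of the asymmetric unit
        box    : (3,) orthorhombic box lengths (Lx, Ly, Lz)

        Convention assumed here: 180-degree rotation about the y axis
        (x -> -x, z -> -z) plus a translation of Ly/2 along y.
        """
        image = coords.copy()
        image[:, 0] *= -1.0                  # rotate 180 degrees about y
        image[:, 2] *= -1.0
        image[:, 1] += 0.5 * box[1]          # screw translation of half a box length
        return image

    # Placeholder coordinates for three atoms in a 60 x 80 x 60 Angstrom box.
    xyz = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [-2.0, 7.0, 0.5]])
    print(p21_image(xyz, np.array([60.0, 80.0, 60.0])))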

Support Year: 21
Fiscal Year: 2018
Name: U.S. National Heart Lung and Blood Inst
