Literature DB >> 26123630

SDA 7: A modular and parallel implementation of the simulation of diffusional association software.

Michael Martinez1, Neil J Bruce1, Julia Romanowska1, Daria B Kokh1, Musa Ozboyaci1,2, Xiaofeng Yu1,3, Mehmet Ali Öztürk1,3, Stefan Richter1, Rebecca C Wade1,4,5.   

Abstract

The simulation of diffusional association (SDA) Brownian dynamics software package has been widely used in the study of biomacromolecular association. Initially developed to calculate bimolecular protein-protein association rate constants, it has since been extended to study electron transfer rates, to predict the structures of biomacromolecular complexes, to investigate the adsorption of proteins to inorganic surfaces, and to simulate the dynamics of large systems containing many biomacromolecular solutes, allowing the study of concentration-dependent effects. These extensions have led to a number of divergent versions of the software. In this article, we report the development of the latest version of the software (SDA 7). This release was developed to consolidate the existing codes into a single framework, while improving the parallelization of the code to better exploit modern multicore shared memory computer architectures. It is built using a modular object-oriented programming scheme, to allow for easy maintenance and extension of the software, and includes new features, such as the addition of flexible solute representations. We discuss a number of application examples, which describe some of the methods available in the release, and provide benchmarking data to demonstrate the parallel performance.
© 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.

Keywords:  Brownian dynamics; biomacromolecular diffusion; macromolecular association; parallelization; protein adsorption; protein flexibility; protein-solid state interactions


Year:  2015        PMID: 26123630      PMCID: PMC4755232          DOI: 10.1002/jcc.23971

Source DB:  PubMed          Journal:  J Comput Chem        ISSN: 0192-8651            Impact factor:   3.376


Introduction

Protein–protein diffusional association processes can occur on timescales extending to the order of seconds, beyond the scope of current all‐atom molecular dynamics (MD) simulations. This problem, and the need to sample many molecular encounters to provide reliable statistics on association kinetics, mean that simplified models are required. One approach is to model protein diffusion by Brownian dynamics (BD) simulation of rigid solute bodies in a continuum model of the solvent. Apart from eliminating the solvent's degrees of freedom, the use of rigid solutes reduces the computational cost of simulations in two main ways. First, by neglecting high frequency intramolecular vibrations, a much larger simulation time step (on the order of picoseconds, compared to the femtosecond time steps required in MD simulations) may be used, greatly reducing the number of force evaluations required to propagate the simulation trajectory. Second, intermolecular interactions can be modeled as the interaction of fixed interaction sites on one solute with a precomputed interaction potential grid on another solute, reducing the formal scaling of the calculation of intermolecular forces to O(N), with N the number of solute atoms, compared to the O(N log N) or O(N²) algorithms used in MD. This treatment of forces between solutes is used in simulation of diffusional association (SDA) and is also employed in other BD simulation software packages, namely, UHBD,1, 2 Browndye,3 BrownMove,4 and Macrodox.5 SDA is a BD simulation software package that was first reported in 1997 by Gabdoulline and Wade6, 7 for computing bimolecular rate constants for protein–protein association.
The software was later extended to allow the study of protein–protein docking,8 electron transfer,9 and docking to a metal surface.10 More recently, Mereghetti et al.11 developed a variant of SDA (SDAMM) which allows the simulation of multiple macromolecules, to compute diffusional and associational properties, and investigate the effects of macromolecular crowding which occur at high macromolecular concentrations.12 Applications of SDA include studies of the effects of viral RNA binding on capsid protein association rates,13 the association of a soluble protein to a membrane bound protein,14 protein association to membrane phosphoinositide lipids,15 protein binding to cellulose fibres,16 and the characterisation of drug‐receptor encounter complexes.17 A BD method for simulating many solutes,18 based on SDA but developed independently, was also used to simulate a model of the bacterial cytoplasm.19 In this article, we report an updated and restructured version of SDA, SDA 7. Our motivation for creating this new version was driven by the following five factors: (i) our desire to unify the SDA and SDAMM codes, their associated preparation and analysis tools, and derived codes that model surface interactions, into a single maintainable framework; (ii) to improve the parallel performance of the software to better utilize modern parallel computer architectures; (iii) to introduce dynamic memory management to allow larger systems to be studied without the need for recompilation of the code; (iv) to allow solute flexibility to be treated in the simulated models; and (v) to improve the modularity and readability of the code to allow for easier maintenance and extension, both by developers and end‐users. 
In the sections that follow, we describe the theory of Brownian dynamics simulation of biomacromolecules and report the new features that have been developed as part of SDA 7, including a new algorithm which allows for solutes to be described by more than one rigid structure, thus incorporating conformational flexibility, or changes in protonation states, into the simulations. We then describe the new object‐oriented structure of the code, including a discussion of the reusability this adds to the framework, and describe the parallelization scheme employed. Finally, we give a set of simulation benchmarks and use cases.
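The O(N) grid-based force evaluation described above replaces explicit pairwise atom–atom sums with a lookup of one solute's precomputed potential grid at the other solute's interaction sites. SDA itself is written in Fortran; the following Python sketch (function names and the simple energy model are illustrative, not SDA's API) shows the core idea: trilinear interpolation of a 3D grid at each charge site, so the cost grows linearly with the number of sites.

```python
import numpy as np

def trilinear(grid, origin, spacing, point):
    """Trilinearly interpolate a 3D potential grid at a Cartesian point."""
    g = (np.asarray(point, dtype=float) - origin) / spacing  # fractional grid coords
    i0 = np.floor(g).astype(int)                             # lower corner of the cell
    f = g - i0                                               # fractional offset in cell
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1.0 - f[0]) *
                     (f[1] if dy else 1.0 - f[1]) *
                     (f[2] if dz else 1.0 - f[2]))
                val += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return val

def interaction_energy(grid, origin, spacing, charges, positions):
    """O(N) energy: each charge site reads the partner's precomputed grid once."""
    return sum(q * trilinear(grid, origin, spacing, p)
               for q, p in zip(charges, positions))
```

Forces follow from the same lookup via finite differences (or an analytic gradient of the interpolant); the key point is that no pairwise atom loop over both solutes is needed.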

Brownian Dynamics Simulation of Macromolecular Diffusion

Modeling the interactions of biomacromolecules in solution is a highly complex problem with many degrees of freedom. One method to reduce the dimensionality of the problem is by using implicit solvent models in which the solvent is treated as a dielectric continuum.20 While continuum‐based models provide a description of the energetics of solvation, the dynamic effects of solvent friction may be lost, leading to an incorrect description of time‐dependent properties. In a Brownian dynamics simulation, these frictional forces are introduced in a stochastic manner that represents the buffeting of solutes by the thermal motion of solvent atoms. Ermak and McCammon21 introduced an algorithm to describe the propagation of a system of N Brownian particles, in which the displacement \Delta r_i of particle i during a time step \Delta t is given by

\Delta r_i = \frac{\Delta t}{k_B T} \sum_j D_{ij} F_j + R_i    (1)

where k_B and T are the Boltzmann constant and simulation temperature, respectively, D_{ij} is the 3 × 3 hydrodynamic diffusion tensor22, 23, 24 between particles i and j, F_j is the systematic force acting on particle j, and R_i is a vector describing a random displacement of i, sampled at each simulation step from a Gaussian distribution of mean zero that satisfies the variance–covariance relation \langle R_i R_j^T \rangle = 2 D_{ij} \Delta t for all i and j. The vector R_i is obtained by Cholesky decomposition of the 3N × 3N total diffusion tensor of the system, whose 3 × 3 blocks are the pairwise tensors D_{ij}. This decomposition scales as O(N^3), and is the most computationally demanding step of such a Brownian dynamics scheme. In SDA 7, we model solutes using all‐atom rigid conformations, which allows the use of precomputed interaction grids to speed up the calculation of the systematic forces F_i (see Appendix A). Rigid solutes also remove the need to model intramolecular hydrodynamic interactions, and in dilute solutions intermolecular hydrodynamic interactions can be assumed to be negligible, meaning the off‐diagonal tensors D_{ij} (i ≠ j) can be taken as zero, while the diagonal tensors D_{ii} can be replaced by scalar quantities.
Equation (1) can therefore be simplified to

\Delta r_i = \frac{D_i^t \Delta t}{k_B T} F_i + R_i    (2)

where D_i^t is the infinite‐dilution translational diffusion coefficient of solute i, and R_i is a random vector of mean zero and variance 2 D_i^t \Delta t per Cartesian component. An analogous equation is used to describe the rotation of the rigid solute i, in which the translational diffusion coefficient is replaced by the rotational coefficient D_i^r and the forces are replaced by torques.6 Two simulation schemes are used in SDA 7, one that models the bimolecular association of a pair of solutes,6 and one which models the dynamics of many solute molecules in solution.11 In the first scheme, one of the solutes remains fixed at a position in the centre of the simulated volume, but is allowed to rotate, while the diffusion of the other is simulated. To account for the diffusion of the fixed solute, a combined translational diffusion coefficient is used for the mobile solute, given by the sum of the coefficients of the two solutes. This approach is used to calculate diffusional association rate constants,6, 7, 25 using the Northrup–Allison–McCammon method,26 electron transfer rates,9 and for docking to obtain the structures of encounter complexes.8 In the second scheme, the dynamics of many solutes are modeled, in a simulated volume that may be either periodic or nonperiodic, allowing the effects of solute concentrations to be included. As the total concentration of the solutes increases, the assumption that hydrodynamic interactions are negligible becomes less valid. To include some of the effects of hydrodynamic interactions, while avoiding the computational expense of the full hydrodynamic diffusion tensor approach described in eq. (1), Mereghetti et al.12 introduced a mean‐field approximation to hydrodynamic interactions, which is included in SDA 7. Following a method first described by Heyes,27 the diffusion coefficient of solute i is scaled at each simulation time step according to the fractional occupancy of the local volume surrounding i by other solutes.
In calculating the fractional occupancy of local volumes, all solutes are assumed to be spherical, with the radii of their volumes given by their Stokes hydrodynamic radii.
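The simplified propagation of eq. (2) can be sketched compactly. The following Python snippet (illustrative only; SDA 7 itself implements this in Fortran 90) advances one solute by the deterministic drift from the systematic force plus a Gaussian random displacement whose per-component variance is 2 D Δt:

```python
import numpy as np

def bd_step(pos, D_trans, force, dt, kT, rng):
    """One Ermak-McCammon step without hydrodynamic coupling (eq. 2):
    drift from the systematic force plus a random displacement drawn
    from a Gaussian of mean zero and variance 2*D*dt per component."""
    drift = (D_trans * dt / kT) * force
    noise = rng.normal(0.0, np.sqrt(2.0 * D_trans * dt), size=3)
    return pos + drift + noise
```

Rotation is propagated analogously with the rotational diffusion coefficient and torques; the mean-field hydrodynamic correction would simply rescale `D_trans` each step according to the local volume fraction occupied by neighbouring solutes.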

New Features in SDA 7

Flexible solute representations

In previous versions of SDA, each solute has been represented by a single rigid conformation. Although a rigid approximation is suitable in many cases, it is known that docking poses can be very sensitive to the initial structures of the interacting solute partners, as solutes may undergo conformational changes during the binding process. Similarly, association rates may be affected by conformational selection or adaptation which is not accounted for in a rigid model. In SDA 7, solutes can be represented by a set of rigid conformations. If available, these may be a set of structures determined by crystallography or NMR. Alternatively, they may be generated by modeling or simulation. For example, a normal mode analysis may be performed to predict the biologically relevant low frequency motions of a protein, from which a set of representative conformations may be generated. A set of different conformations of a given solute may also be used to represent different protonation states of titratable groups at one or several pH values.

Potential of mean force calculations

SDA 7 is able to perform potential of mean force (PMF) calculations of the molecular adsorption of solutes to a planar surface. The implementation of the PMF calculation is based on a thermodynamic integration method described by Kokh et al.10 Given a rigid body structure of a molecule, this method describes the free energy of adsorption as a function of six degrees of freedom (three rotational and three translational). This configurational space is sampled using a set of rotation angles and positions over an integration interval with predefined subintervals of equal width for each of the six degrees of freedom. As the calculation of a complete six dimensional free energy landscape is usually not feasible, exploration of the configurational space can be restricted to a small area over the surface, provided that this area contains a periodic unit of the surface structure. The reaction coordinate in the PMF calculations is defined as a line perpendicular to the surface.
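The essence of such a calculation is a Boltzmann average of the adsorption energy over the sampled orientations (and lateral positions) at each height above the surface. This Python sketch assumes a user-supplied energy function `energy_fn(z, omega)` and is a simplified stand-in for the six-dimensional integration grid described above, not the actual SDA 7 implementation:

```python
import numpy as np

def pmf_profile(energy_fn, z_values, orientations, kT):
    """PMF along the surface normal z: at each z, Boltzmann-average the
    interaction energy over the sampled orientations, giving
    PMF(z) = -kT * ln < exp(-E(z, omega)/kT) >_omega."""
    pmf = []
    for z in z_values:
        boltz = [np.exp(-energy_fn(z, w) / kT) for w in orientations]
        pmf.append(-kT * np.log(np.mean(boltz)))
    return np.array(pmf)
```

In a real calculation the orientation set would cover the three rotational degrees of freedom systematically, and the lateral positions would tile one periodic unit of the surface.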

Updated preparation and analysis tools

The set of tools available in SDA 7 has also been extended and unified for both bimolecular and many‐molecule simulations. These include preparation tools for creating the 3‐dimensional interaction potential grids, for calculating the effective charges28 used to represent the solutes, and for generating initial configurations with many solutes at a given concentration. The analysis tools include utilities for postprocessing trajectories, for example to calculate translational and rotational self‐diffusion coefficients, and conversion tools to convert SDA 7 trajectory and structure files to PDB or DCD formats, or between UHBD and DX grid file formats.
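As an example of the kind of postprocessing these analysis tools perform, a translational self-diffusion coefficient can be estimated from the mean-square displacement (MSD) of the solutes via the Einstein relation D = MSD(t)/(6t). The sketch below is illustrative Python, not the SDA 7 tool itself, and uses a single lag time for simplicity (a real tool would fit the MSD over a range of lags):

```python
import numpy as np

def self_diffusion(traj, dt, lag):
    """Translational self-diffusion coefficient from the MSD at one lag:
    D = MSD(lag) / (6 * lag * dt). `traj` has shape (n_frames, n_solutes, 3)."""
    disp = traj[lag:] - traj[:-lag]            # displacements at this lag time
    msd = np.mean(np.sum(disp**2, axis=-1))    # average over frames and solutes
    return msd / (6.0 * lag * dt)
```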

Details of the SDA 7 Implementation

SDA 7 has been rewritten in Fortran 90 (F90) to allow the use of language features that were not available in the previous Fortran 77 (F77) versions of SDA. Dynamic memory allocation allows for more efficient memory use, and for larger systems to be simulated, limited only by the available system memory. F90 modules provide a modular framework to the code, allowing library functions to be reused in various parts of the main simulation code and that of the related preparation and analysis tools. The use of modules and pointers also allows related data structures to be grouped together producing an object‐oriented structure. Unlike in previous versions of SDA, bimolecular simulations may now be run in parallel, through the use of OpenMP.

Solute representation

Figure 1 shows the relationships between the module classes used to represent a set of solutes. During SDA 7 simulations, the precomputed interaction grids constitute the largest memory requirement of the calculation. In the case of a simulated system containing many identical solute molecules, it is desirable to store only one copy of their identical grids. Furthermore, when a solute is represented by more than one rigid conformation, the grids corresponding to the different conformations of the same solute should be grouped together in a single class.
Figure 1

Simplified class diagram of the solute‐grid relationship (and the memory storage of simulation data). An array contains pointers to each instance of the solute class for each solute. A single setofgrid instance is used for all identical conformations of the solutes to minimize memory requirements. In the case of flexible solutes, an intermediate flexible object containing a linked‐list of pointers to the setofgrid instances of each conformation is added to each flexible solute. One conformation of a solute is described by the setofgrid class. It comprises the 3D precomputed interaction potential grids (yellow in the figure) as well as the positions and values of their interaction sites, such as effective charges or atomic surface accessibilities, within the 3D solute structure. Additional information is also stored here, such as reaction criteria to define encounter complexes, the solute infinite dilution diffusion coefficients used for BD propagation or the Stokes hydrodynamic radii used in the mean‐field hydrodynamic approximation. Each solute object only needs to store the instantaneous position and orientation of the solute and a pointer to its associated setofgrid instance. When a solute is defined by more than one conformation, an intermediate flexible object (shown in orange) is associated with each flexible solute. It consists of a linked list of setofgrid instances, each corresponding to a specific conformation. Upon a change of conformation, the pointer to the current conformation is updated to the correct setofgrid instance. Finally, pointers to all solute instances are stored in an array (shown in red). This design provides a simple and memory efficient way to simulate many different solute types, each of which may be represented by a set of conformations.
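The setofgrid/flexible/solute design can be illustrated with a minimal sketch. SDA 7 implements this with Fortran 90 modules and pointers; the Python classes below are a schematic analogue only (class and attribute names mirror the description, not the actual Fortran source):

```python
class SetOfGrid:
    """Grids and interaction-site data for one rigid conformation
    (potential grids, effective charges, diffusion coefficients, ...)."""
    def __init__(self, name, grids=None):
        self.name = name
        self.grids = grids or {}

class Flexible:
    """List of conformations for a flexible solute; `current` points at
    the SetOfGrid of the active conformation and is updated on a switch."""
    def __init__(self, conformations):
        self.conformations = list(conformations)
        self.index = 0
    @property
    def current(self):
        return self.conformations[self.index]
    def switch(self, new_index):
        self.index = new_index

class Solute:
    """Only the instantaneous position/orientation plus a reference to
    shared grid data: many identical solutes share one SetOfGrid."""
    def __init__(self, grids, position):
        self.grids = grids          # a SetOfGrid, or a Flexible wrapper
        self.position = position
```

The memory saving comes from the sharing: 256 identical HEWL solutes hold 256 small Solute objects but only one copy of the grids.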

Algorithm for treating solute flexibility

When simulating solutes defined by multiple coordinate sets, these coordinates are stored in a linked list. After the update of the solutes' positions and orientations during the BD propagation step, trial moves to alternative coordinate sets may be attempted. The time interval between trial moves can either be set to a fixed period, or be drawn from a normal distribution, with the mean and standard deviation of this distribution set by the user in the input file. A parameter defined in the input file, nearest, can limit a trial move to an adjacent conformation in the list, or specify that trial moves may be attempted to a randomly chosen conformation. The use of adjacent conformations may be preferable when the conformations are generated by normal mode analysis, but may introduce a bias in the sampling of the first and last conformation in the list. When the conformations are obtained from NMR or crystal structures, a random change of conformation may be preferable. SDA 7 includes three acceptance criteria for deciding whether a trial move should be accepted: (i) to always accept the move, (ii) to accept all moves that lower the interaction energy of the solute, and (iii) a Metropolis algorithm29 in which conformational changes that lower the interaction energy of that solute are always accepted, while those that increase the interaction energy are accepted with a probability exp(−(E_new − E_old)/k_B T), where E_old is the interaction energy of the solute with its surroundings in the old conformation and E_new is the corresponding energy in the new conformation.
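The three acceptance criteria can be condensed into a single decision function. This is a schematic Python rendering of the rules described above (the function name and `criterion` labels are illustrative, not SDA 7 keywords):

```python
import math
import random

def accept_conformation_move(e_old, e_new, kT, criterion="metropolis", rng=random):
    """Acceptance test for a trial conformational change, following the
    three criteria in the text: (i) always accept, (ii) accept only
    energy-lowering moves, (iii) the Metropolis rule."""
    if criterion == "always":
        return True
    if criterion == "downhill":
        return e_new <= e_old
    # Metropolis: downhill moves always accepted, uphill moves accepted
    # with probability exp(-(E_new - E_old) / kT)
    if e_new <= e_old:
        return True
    return rng.random() < math.exp(-(e_new - e_old) / kT)
```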

SDA 7 workflow and parallelization

The workflow of an SDA 7 simulation is shown in Figure 2. The initialisation stage is common to all simulation types, with simulation parameters defined in a single control input file. This file also defines the location of any additional files required for the calculation, such as structure files and files describing the precomputed solute interactions.
Figure 2

Workflow of an SDA 7 simulation. The simulation parameters are set in a single control input file, which also defines the location of the additional files needed for the simulation. The simulation proceeds through one of two routes, depending on whether it is a bimolecular or many‐molecule simulation. Bimolecular simulations loop over many trajectories with different initial configurations, while many‐molecule simulations consist of a single trajectory of nsteps BD moves. In both simulation types, there are three main calculations that may be computed within a simulation time step: a force calculation (F), a BD propagation move (BD), and an energy evaluation (E). The simulation types are parallelized differently (shown here as an example with four OpenMP threads), with each thread in a bimolecular simulation running an independent trajectory and the force, BD and energy calculations parallelized across threads in many‐molecule simulations. In many‐molecule simulations, all threads must synchronize between these three calculations. Both types of simulation produce output files that are consistent with each other. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]

The main calculation loop is processed differently for bimolecular and many‐molecule simulations, mainly because a different OpenMP parallelization scheme is required for these different simulation types. During bimolecular simulations, SDA can compute association or electron transfer rate constants and predict the structures of the most energetically favorable docked encounter complexes. During many‐molecule simulations, the radial distribution functions can be computed on the fly, while other properties, like translational and rotational self‐diffusion coefficients or transient oligomerizations, are computed with separate tools in a postprocessing stage. SDA 7 has been developed to exploit multicore shared‐memory architectures with the use of OpenMP.
During bimolecular simulations, thousands of trajectories are typically generated to accumulate the required statistical data. In this case, a simple parallelization scheme is possible, with each thread computing a single trajectory at a time. Each thread needs to have its own private copy of the solutes, with independent positions and orientations, but all threads can share the same grid data, reducing the overall memory cost compared to either running the trajectories as independent jobs, or using a Message Passing Interface (MPI) parallelization scheme. There are some scaling bottlenecks due to accessing shared arrays during the simulations. For example, during docking simulations, each thread owns a private list of the high scoring complexes that it has computed. At regular intervals, these lists are merged into a shared one. An additional factor that affects scaling is what we call the "end‐effect." At the end of a bimolecular simulation, only one thread is executing a trajectory; the others have no remaining trajectories to run, and remain idle. In practice, the end‐effect is usually small, as shown in the benchmarks described in the results section. In many‐molecule simulations, single trajectories are generated. The calculation of the forces on each solute represents the most computationally expensive step (typically more than 95% of the CPU time). The algorithm, which formally scales as O(N²) (or O(N) with the use of cutoffs) with N being the number of solutes, is a double loop over pairs of solutes. OpenMP is used for distributing the outer loop across the number of available cores. The calculation of energies and the trajectory propagation are parallelized similarly. Simulations can be performed within a spherical volume or a rectangular box, with or without periodic boundary conditions. An atomically detailed planar surface, lying in the xy‐plane, may be defined in simulations.
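The trajectory-level scheme used for bimolecular simulations maps naturally onto a shared-memory thread pool: workers keep private solute state but read the same grid data. The sketch below uses Python threads as a stand-in for OpenMP (SDA 7 uses OpenMP in Fortran; `run_trajectory` here is a toy placeholder, not SDA's trajectory routine):

```python
from concurrent.futures import ThreadPoolExecutor

def run_trajectory(seed, shared_grids):
    """Stand-in for one independent BD trajectory: private solute state
    per worker, read-only access to the shared grid data."""
    # ... propagate the mobile solute, test the encounter criterion ...
    return seed % 2                  # toy result: did this trajectory "react"?

def run_simulation(n_traj, n_threads, shared_grids):
    """Distribute independent trajectories over a thread pool and merge
    the results, mirroring the bimolecular parallelization scheme."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(lambda s: run_trajectory(s, shared_grids),
                                range(n_traj)))
    return sum(results) / n_traj     # e.g. fraction of reactive trajectories
```

The "end-effect" described above corresponds to the tail of `pool.map`: once fewer trajectories remain than workers, some threads idle until the last trajectory finishes.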

Application examples and benchmarks

In this section, we discuss simulations of three systems that highlight different functionalities available in SDA 7 (Fig. 3). First, we consider the association of the two proteins, barnase and barstar (Fig. 3a). Barnase is an extracellular ribonuclease and barstar is its intracellular inhibitor. They are known to have a very strong binding affinity, with a fast bimolecular association rate constant of 10^7−10^9 M^−1 s^−1,30, 31 depending on experimental conditions. Barnase and barstar already served as a model protein pair in the development of SDA6, 7, 8, 25 because the wild‐type proteins and a number of mutants have been extensively characterized biochemically and structurally. We use the barnase–barstar system to investigate the parallel scaling of SDA 7, during both docking and association rate calculations. We also compare the single thread speed with that obtained using SDA 6.
Figure 3

Test systems used for the validation and benchmarking of SDA 7. a) The electrostatically guided bimolecular association of barnase (cyan) and barstar (green). The +0.5 kcal/(mol e) (blue) and −0.5 kcal/(mol e) (red) isosurfaces of the electrostatic potentials surrounding each solute are shown. b) A snapshot of the simulation of 256 HEWL molecules in solution (the HEWL molecules are colored differently for clarity). c) Docking of the globular domain of the linker histone H5 (blue) to the nucleosome. The crystal conformation (magenta, 70) and a structure generated with an elastic network model (cyan, 76) of the flexible region of the nucleosome are shown.

The second system is a set of Hen Egg White Lysozyme (HEWL) protein molecules whose diffusion is simulated using the many‐molecule simulation functionality of SDA 7 (Fig. 3b). Mereghetti et al.11 previously reported the use of SDAMM to study the dynamics of HEWL in solution at different protein concentrations. They compared osmotic second virial coefficient B 22 values at three different pH values (3, 6, and 9) and at different ionic strengths, and the self‐diffusion coefficients at pH 4.6 and at varying protein concentrations. This system was also used to validate the implementation of the Debye‐Hückel electrostatic correction used in both SDAMM and SDA 7.32 We use this example system to assess the scaling of many‐molecule simulations in SDA 7, and examine the effect that including a flexible solute representation, by allowing the HEWL molecules to switch between their protonation states at pH 3, 6, and 9, has on the scaling of the method. While modeling transitions between pH‐dependent charge states is somewhat artificial, we use the results of these simulations to discuss the potential future applications of this novel method. Finally, we consider the docking of the globular domain of the linker histone H5 to a flexible nucleosome (Fig.
3c), which was previously studied by Pachov et al.33 This application case shows the use of SDA 7 for a large system, with one solute treated flexibly. Pachov et al.33 generated a series of conformations from a normal mode analysis using an elastic network model of the nucleosome, and independent BD simulations were performed for each conformation. With the flexibility module of SDA 7, these simulations can be performed in a more efficient way. Treating solute flexibility in a single simulation allows only those complexes with the most favorable binding energies to be recorded, whichever conformation of the flexible solute they include. Furthermore, with a suitable conformational sampling algorithm, sampling can be biased towards those conformations that produce encounter complexes with more favorable binding free energies, reducing the total simulation time required.

Methods

SDA 7 was compiled using gfortran 4.8.1. The barnase–barstar and HEWL benchmark simulations were run on a compute cluster consisting of nodes containing four AMD Opteron 6174 12‐core 2.2 GHz processors. For comparison, SDA 6 was compiled with both gfortran 4.8.1 and ifort 11.1, and simulations were performed on the same compute cluster. The docking simulation of the linker histone H5 to the nucleosome was performed on a cluster node containing two Intel Xeon E5‐2630 6‐core 2.3 GHz processors.

Case I: Bimolecular simulation of barnase–barstar association

The coordinates of barnase and barstar were taken from a crystal structure of their complex (PDB code: 1BRS, 2.0 Å resolution). Protonation states for each protein were assigned at pH 7 using PDB2PQR34, 35 resulting in net charges of +2 e for barnase and −6 e for barstar. Atomic partial charges and radii were taken from the Amber force field.36 Electrostatic potential grids of 129³ points with 1 Å spacing were generated with APBS37 by solution of the linearized Poisson–Boltzmann equation with an ionic strength of 50 mM, an ion radius of 1.5 Å, a protein interior dielectric constant of 4.0, a solvent dielectric constant of 78.0 and a temperature of 298.15 K. Effective charges28 on each protein were calculated using the ECM module, and electrostatic and hydrophobic desolvation grids of 110³ points with 1 Å spacing were created with the make_edhdlj_grid module of SDA 7. Bimolecular association rate and docking simulations were performed using SDA 7 with processor core counts ranging from 1 to 48. For each simulation type and core count, the simulations were repeated 10 times using different random number generator seeds. In each simulation, 20,000 independent trajectories were created. Ten simulations of 20,000 trajectories were also performed with SDA 6 (compiled with both gfortran and ifort), and compared to the single processor simulations using SDA 7. In all simulations, the initial position and orientation of barstar was randomly chosen to lie on a sphere of radius 150 Å, centred on the geometric midpoint of barnase, and all simulations were terminated when the interprotein centre‐to‐centre separation reached 300 Å. During the simulations for the calculation of the bimolecular association rate constant, long‐range electrostatic and short‐range electrostatic desolvation interactions (see Appendix A) were acting between the proteins.
The association rate constant k_on was calculated using the Northrup‐Allison‐McCammon method,26 with an encounter complex defined when two independent native contacts were sampled at a distance of 6 Å, as used previously by Gabdoulline and Wade.25 Pairs of native contacts were considered as being independent if the interacting residues of the same protein were separated by more than 5 Å. To better describe the binding energetics of close protein contacts, in addition to the long‐range electrostatic and short‐range electrostatic desolvation interactions used in the rate calculations, short‐range hydrophobic desolvation interactions were also modeled during the docking simulations, during both the BD search and in scoring the docked complexes. Five thousand low energy docked encounter complex structures were recorded during the simulation, and subsequently clustered using an average‐linkage hierarchical clustering method developed by Motiejunas et al.8 Encounter complex solutions that lay within 1 Å RMSD of a previously recorded lower energy solution were not recorded, but were counted to allow the sampling of cluster populations to be determined correctly. Cluster representatives were compared to the crystallographic structure of the complex (PDB code 1BRS), by aligning the backbone atoms in barnase and calculating the RMSD of barstar backbone atoms. Some docked cluster representatives were further refined by low temperature MD using the Amber ff99SB force field of Hornak et al.38 and the GBNeck implicit solvent model of Mongan et al.39 The structures of the docked encounter complexes were first energy minimized, before MD refinement. Initial velocities were drawn from a Maxwell‐Boltzmann distribution at 30 K; the system was then heated to 200 K over 50 ps, simulated at 200 K for a further 50 ps, then cooled back to 30 K over 200 ps.
The final complex structures obtained were then compared to the known native docked complex (PDB: 1BRS) and their binding affinities estimated using the same GB model used during the simulations.
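The Northrup‐Allison‐McCammon estimate used above has the closed form k_on = k_D(b)·β / (1 − (1 − β)·k_D(b)/k_D(q)), where k_D(r) is the diffusion-limited (Smoluchowski) rate of reaching centre-to-centre separation r, b is the start surface, q the truncation surface, and β the fraction of trajectories satisfying the encounter criterion. A minimal sketch in Python, with b = 150 Å and q = 300 Å taken from the setup above; the relative diffusion coefficient and the reacted fraction are illustrative assumptions, not values from this study:

```python
import math

N_A = 6.02214076e23  # Avogadro's number, 1/mol

def k_d(r_cm, d_cm2_s):
    # Smoluchowski rate (M^-1 s^-1) for diffusion to a sphere of radius r,
    # valid when interactions are negligible beyond r
    return 4.0 * math.pi * d_cm2_s * r_cm * N_A / 1000.0

def nam_rate(beta, b_cm, q_cm, d_cm2_s):
    """Northrup-Allison-McCammon estimate of k_on.

    beta is the fraction of BD trajectories started on the b-surface that
    satisfy the encounter criterion before escaping to the q-surface; the
    denominator corrects for trajectories that escape to q but would have
    returned to b in an unbounded simulation.
    """
    kb = k_d(b_cm, d_cm2_s)
    kq = k_d(q_cm, d_cm2_s)
    return kb * beta / (1.0 - (1.0 - beta) * kb / kq)

# b = 150 A and q = 300 A match the simulation setup above; the relative
# diffusion coefficient and reacted fraction are assumed for illustration
D = 2.0e-6             # cm^2/s (assumed)
b, q = 150e-8, 300e-8  # cm
print(f"k_on ~ {nam_rate(0.01, b, q, D):.2e} M^-1 s^-1")
```

Note that for β = 1 the expression reduces to k_D(b), and for β = 0 it vanishes, as expected for a purely diffusion-controlled bound and a non-reactive pair, respectively.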

Case II: Many‐molecule simulations of hen egg white lysozyme

A crystal structure of HEWL (PDB code: 1HEL, 1.7 Å resolution) was used to assess the performance of many‐molecule simulations in SDA 7. The protonation states of HEWL at pH 3, 6, and 9 were assigned from experimental pKa values, as described by Mereghetti et al.,11 giving net charges of +13 e, +8 e, and +7 e, respectively. Atomic partial charges and radii were assigned from the PARSE force field.40 Long‐range electrostatic potential grids of 97³ points with 1 Å spacing were created by solving the linearized Poisson–Boltzmann equation with APBS,37 using an ionic strength of 5 mM, an ion radius of 1.5 Å, a protein dielectric constant of 2.0, a solvent dielectric constant of 78.0, and a temperature of 298.15 K. Effective charges were calculated using the ECM module of SDA 7, while short‐range electrostatic and hydrophobic desolvation grids, and soft‐core repulsive grids, of 97³ points with 1 Å spacing were created using the make_edhdlj_grid module. An initial configuration of 256 HEWL molecules was created using the genbox tool in SDA 7 at a protein concentration of 150 mg/ml. Two sets of simulations were performed to investigate the scaling performance of the SDA 7 code. In the first, all 256 HEWL molecules were simulated in their pH 6 protonation state. In the second, each HEWL molecule was randomly assigned to a pH 3, 6, or 9 protonation state at the beginning of the simulation, and transitions between states were attempted at regular intervals during the simulation. Two algorithms for changing states were used: one in which a trial change in the state of a protein was accepted only if it lowered the interaction energy of that protein with its surroundings, and another in which trial moves were accepted according to the Metropolis criterion.29 All simulations were performed using processor core counts ranging from 1 to 48, with each simulation repeated 10 times with different initial random seeds.
All simulations were performed for 10 ns using a 0.25 ps time step, with hydrodynamic interactions accounted for using a mean‐field approach12 and longer range electrostatic interactions modeled using a Debye–Hückel correction term.32 Two longer 500 ns BD simulations were also performed, during which each HEWL molecule was initially assigned a randomly chosen charge state. These states were allowed to change during the simulations according to either the minimum energy or Metropolis criteria. The fractional occupancies of each charge state were then examined during the simulation trajectories.
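The two acceptance rules for trial protonation-state changes can be sketched as follows. This is a schematic in assumed reduced units (energies in kT), not SDA 7's internal code:

```python
import math
import random

def accept_state_change(delta_e, scheme, kT=1.0, rng=random):
    """Accept or reject a trial protonation-state change.

    delta_e : change in the protein's interaction energy with its
              surroundings (in units of kT here; an assumption)
    scheme  : 'minimum'    -> accept only energy-lowering moves
              'metropolis' -> also accept uphill moves with
                              probability exp(-delta_e / kT)
    """
    if scheme == "minimum":
        return delta_e < 0.0
    if scheme == "metropolis":
        return delta_e < 0.0 or rng.random() < math.exp(-delta_e / kT)
    raise ValueError(f"unknown scheme: {scheme}")

rng = random.Random(0)
print(accept_state_change(-2.0, "minimum"))              # downhill: accepted
print(accept_state_change(+2.0, "minimum"))              # uphill: rejected
print(accept_state_change(+2.0, "metropolis", rng=rng))  # sometimes accepted
```

The minimum energy rule quenches the system toward the lowest-energy states, while the Metropolis rule samples a Boltzmann distribution over states, which is consistent with the slower convergence it shows in the 500 ns runs described in the Results.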

Case III: Docking of the linker histone H5 to the nucleosome

The initial conformation of the nucleosome was generated from a crystallographic structure of the core nucleosome particle (PDB code: 1KX5, 1.9 Å resolution), with the linkers extended by 20 base pairs using a lower resolution structure (PDB code: 1ZBB, 9 Å resolution). Normal‐mode analysis with an elastic network model was performed using the Nomad‐Ref webserver41 to generate a flexible representation of the linker regions, as described by Pachov et al.33 Seven structures (numbered 70 to 76 in Ref. 33) were selected to represent the lowest frequency vibrational mode. The structure of the globular domain of the linker histone H5 (gH5) was taken from the crystal structure (chain B, PDB code: 1HST, 2.5 Å resolution). Only long‐range electrostatic interactions were taken into account in the simulation. Titratable residues were protonated using PDB2PQR34, 35 with the Amber force field,36 resulting in net charges of −237 e for the nucleosome and +11 e for gH5. The electrostatic potential grids were calculated with APBS37 by solving the nonlinear Poisson–Boltzmann equation at an ionic strength of 0.1 M, with an ion radius of 1.5 Å, a solute dielectric constant of 2.0, a solvent dielectric constant of 78.54, and a temperature of 298.15 K. Grid sizes were 257³ points for the nucleosome and 129³ points for gH5, with 1.0 Å spacing. Effective charges were calculated using ECM. The docking simulation consisted of 25,000 trajectories with a maximum simulation time of 100 ns for each trajectory. The initial centre‐to‐centre distance between the nucleosome and gH5 was set to 280 Å, and trajectories were stopped when the distance between the centres of the solutes reached 500 Å.
Docked complexes were recorded when the centre‐to‐centre distance was within 73 Å and the centre of gH5 was within 40 Å of the nucleosome dyad point, as defined by Pachov et al.33 Trial moves to adjacent nucleosome conformations along the lowest frequency normal mode were attempted at a fixed interval of 1.0 ns and accepted according to the Metropolis algorithm. The 20,000 lowest energy docked complexes were recorded for subsequent clustering. Complexes obtained using each nucleosome conformation were clustered separately using the clustering algorithm described by Motiejunas et al.8 Docked structures were only recorded if they had an RMSD larger than 1 Å compared to a previously recorded lower energy solution, but all complexes satisfying the above distance criteria were counted to provide correct cluster populations.
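The record-or-count bookkeeping described above (near-duplicate higher-energy solutions are not stored, but still contribute to cluster populations) can be sketched as follows. The pose representation and the `rmsd` helper are hypothetical stand-ins, not SDA 7 data structures:

```python
def record_complex(recorded, pose, energy, rmsd, cutoff=1.0):
    """Record a docked complex unless it lies within `cutoff` (Å RMSD) of a
    previously recorded lower-energy solution; in that case only bump that
    solution's counter so cluster populations remain correct.

    recorded : list of [pose, energy, count] entries
    rmsd     : function(pose_a, pose_b) -> float (hypothetical helper)
    """
    for entry in recorded:
        if entry[1] <= energy and rmsd(entry[0], pose) < cutoff:
            entry[2] += 1      # counted, but not stored as a new solution
            return recorded
    recorded.append([pose, energy, 1])
    return recorded

# Toy 1-D example: a "pose" is a single coordinate, RMSD is a distance
dist = lambda a, b: abs(a - b)
solutions = []
for pose, energy in [(0.0, -10.0), (0.4, -8.0), (5.0, -9.0), (0.2, -7.0)]:
    record_complex(solutions, pose, energy, dist)
print(solutions)  # two distinct solutions; the first absorbs two near-duplicates
```

In the toy run, only two distinct solutions are stored, but the counter on the first carries the population of its two discarded near-duplicates, which is what keeps the subsequent cluster populations correct.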

Results

Calculations of the bimolecular association rate constant were performed for barnase and barstar (Table 1, Fig. 4). The calculated association rate constants ranged from 2.7 to 2.9 × 10^8 M−1s−1 (Table 1), in good agreement with experimental values and with values computed with previous versions of SDA. The SDA 7 code scales well up to the 48 cores used in the benchmark, with a scaling factor of 39.7, corresponding to a scaling efficiency of 83%. The single core performance is significantly improved in SDA 7, with an average execution time of about 5,000 s, compared to approximately 10,000 s and 8,000 s for SDA 6 compiled with gfortran 4.8.1 and ifort 11.1, respectively.
Table 1

Scaling of the Case I (barnase–barstar) benchmarks on different numbers of cores.

Number of cores   Association                                Docking
                  Time (s)(a)  Scaling(b)    k_on(c)         Time (s)(a)  Scaling(b)    RMSD (Å)(d)
 1                5128 ± 126   1.00 ± 0.02   2.7 ± 0.2       31461 ± 503  1.00 ± 0.02   4.9 ± 0.4
 2                2705 ± 154   1.90 ± 0.11   2.7 ± 0.1       15886 ± 250  1.98 ± 0.03   4.9 ± 0.4
 4                1307 ± 16    3.92 ± 0.05   2.8 ± 0.3        7933 ± 115  3.97 ± 0.06   4.8 ± 0.4
 8                 663 ± 9     7.73 ± 0.10   2.7 ± 0.2        4047 ± 36   7.77 ± 0.07   4.9 ± 0.4
12                 446 ± 6    11.50 ± 0.16   2.8 ± 0.3        2732 ± 48  11.51 ± 0.21   4.9 ± 0.4
16                 341 ± 7    15.04 ± 0.30   2.8 ± 0.3        2085 ± 40  15.09 ± 0.29   4.8 ± 0.4
24                 235 ± 4    21.83 ± 0.35   2.8 ± 0.2        1446 ± 27  21.75 ± 0.41   4.9 ± 0.4
32                 180 ± 3    28.43 ± 0.52   2.8 ± 0.2        1130 ± 20  27.82 ± 0.51   4.8 ± 0.2
48                 129 ± 8    39.69 ± 2.43   2.9 ± 0.3         899 ± 157 34.97 ± 6.10   4.9 ± 0.3

(a) Mean simulation wall‐time (s) with standard deviation.
(b) Mean simulation performance scaling, relative to the mean single core time, with standard deviation.
(c) k_on (10^8 M−1s−1) calculated with an encounter complex definition of two independent native contacts within 6 Å (see Methods).
(d) Root mean squared deviation (Å) of barstar in the representative of the second most populated cluster from barstar in the crystallographic complex (PDB code: 1BRS).
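The scaling factors in Table 1 are, to a good approximation, ratios of mean wall times (the table averages per-run ratios, so its values can differ slightly in the last digit). A quick check from the association column:

```python
def scaling(times_by_cores):
    """Scaling factor (speed-up) and parallel efficiency relative to the
    single-core wall time."""
    t1 = times_by_cores[1]
    return {n: (t1 / t, t1 / (t * n)) for n, t in times_by_cores.items()}

# Mean association wall times (s) from Table 1
times = {1: 5128, 2: 2705, 4: 1307, 8: 663, 12: 446,
         16: 341, 24: 235, 32: 180, 48: 129}

for n, (factor, eff) in sorted(scaling(times).items()):
    print(f"{n:2d} cores: {factor:5.2f}x speed-up, {eff:6.1%} efficiency")
```

At 48 cores this gives roughly a 40-fold speed-up, i.e. an efficiency of about 83%, matching the values quoted in the text.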

Figure 4

Scaling performance for simulations of barnase and barstar to compute a) the bimolecular association rate constant and b) the structures of docked complexes. Error bars show standard deviations across 10 independent simulations performed for each processor count. Dotted lines indicate perfect scaling.

In separate simulations, the diffusional encounter complex structure was predicted by using SDA 7 to dock barstar to barnase (Table 1). The 1000 most energetically favorable complexes from the 20,000 trajectories of each simulation, each with an atomic RMSD of greater than 1 Å from the others, were clustered into five clusters. The second most populated cluster, containing a substantial fraction of the complexes sampled during the simulation, agreed closely with the crystal structure, with a backbone RMSD of the cluster representative of barstar of 4.8–4.9 Å. This cluster was also ranked second energetically. In the most populated cluster, the barnase‐barstar interface was rotated by approximately 90° relative to the crystal structure. The cluster representatives of the second most populated cluster from each of the 90 SDA 7 simulations (nine different processor core counts, repeated 10 times each) were relaxed with low temperature MD. The RMSD of the interfacial residues of barstar, relative to barnase, was monitored during the simulations by considering all barstar heavy atoms within 6 Å of barnase in the crystal structure.
Prior to simulation, the RMSD of these atoms in the 90 cluster representatives, relative to the crystal structure, was between 3.8 and 4.3 Å. While some complexes relaxed to structures further from the native complex, 71 relaxed to conformations in which the RMSD of barstar's interfacial heavy atoms was below 3.5 Å, relative to the crystal structure, with the lowest having an RMSD of 1.9 Å. The structure with the most favorable binding affinity, as calculated from the final complex structure using the same GB parameters as were used in the simulation, had an interfacial heavy atom RMSD of 2.6 Å. During this docking benchmark, SDA 7 scales up to 48 processor cores with a scaling factor relative to one core of 35.0, a 73% scaling efficiency. The single core performance is somewhat improved in SDA 7, with an average execution time of about 31,000 s, compared to approximately 40,000 s and 32,000 s for SDA 6 compiled with gfortran 4.8.1 and ifort 11.1, respectively.

Case II: Many‐molecule simulations of hen egg white lysozyme

Simulations of 256 HEWL molecules were performed first with protonation states fixed to those assigned at pH 6, and then with each molecule able to switch between protonation states assigned at pH 3, 6, and 9 at intervals during the simulation. The simulation wall time for processor core counts ranging from 1 to 48 was obtained and the scaling factor calculated from the mean time required to run the simulation on one core (Table 2, Fig. 5). For each of the three simulation types, we see reasonable scaling up to 32 cores, but little or no improvement in performance when increasing the core count to 48. The simulation times on a given number of processors were very similar for the three simulation types.
Table 2

Scaling of the Case II (HEWL) benchmarks on different numbers of cores.

Number of cores   Single protonation state     Three protonation states
                                               Minimum energy algorithm     Metropolis algorithm
                  Time (s)(a)  Scaling(b)      Time (s)(a)  Scaling(b)      Time (s)(a)  Scaling(b)
 1                6426 ± 83    1.00 ± 0.01     6480 ± 217   1.00 ± 0.03     6431 ± 126   1.00 ± 0.02
 2                3273 ± 30    1.96 ± 0.02     3337 ± 53    1.94 ± 0.03     3300 ± 44    1.95 ± 0.03
 4                1673 ± 18    3.84 ± 0.04     1695 ± 18    3.82 ± 0.04     1675 ± 15    3.84 ± 0.03
 8                 872 ± 8     7.37 ± 0.07      878 ± 3     7.38 ± 0.03      866 ± 7     7.43 ± 0.06
12                 599 ± 9    10.74 ± 0.16      611 ± 5    10.61 ± 0.09      598 ± 7    10.75 ± 0.12
16                 463 ± 6    13.89 ± 0.18      475 ± 4    13.65 ± 0.12      468 ± 6    13.76 ± 0.17
24                 346 ± 12   18.56 ± 0.67      354 ± 5    18.31 ± 0.25      350 ± 6    18.38 ± 0.32
32                 288 ± 10   22.30 ± 0.78      308 ± 9    21.06 ± 0.62      298 ± 13   21.55 ± 0.93
48                 266 ± 24   24.13 ± 2.16      318 ± 22   20.40 ± 1.43      302 ± 30   21.26 ± 2.11

(a) Mean simulation wall‐time (s) with standard deviation.
(b) Mean simulation performance scaling, relative to the mean single core time, with standard deviation.

Figure 5

Scaling performance for 10 ns simulations of 256 HEWL molecules a) with residue protonation states calculated at pH 6, and with protonation states at pH 3, 6 and 9 with conformational switching using the b) minimum energy, and c) Metropolis algorithms. Error bars show standard deviations across ten independent simulations performed for each processor count. Dotted lines indicate perfect scaling.

During the longer 500 ns simulations, the HEWL charge states rapidly converged to the pH 9 charge state, that is, the state in which HEWL has the lowest overall charge. During the simulation in which trial transitions were accepted according to the minimum energy criterion, all HEWL molecules were in this state after 10 ns and remained so for the rest of the simulation (Supporting Information Fig. S1). During the simulations that used the Metropolis acceptance criterion, the charge states converged more slowly: after approximately 75 ns, all HEWL molecules were in the pH 9 state, with only occasional sampling of the other two states in the latter part of the simulation (Supporting Information Fig. S2).

Case III: Docking of linker histone H5 to the nucleosome

The docking of gH5 to the nucleosome took approximately 3.5 h to complete on 12 processor cores. For each nucleosome conformation, the docked complexes were clustered into 10 clusters. Table 3 shows the number of docked complexes obtained for each nucleosome conformation, the population of the first docked cluster, according to the total simulation sampling, and the backbone RMSD of gH5 in the representative of the first cluster and in the docked representative structure obtained by Pachov et al.33
Table 3

Clustering of Case III (linker histone H5‐nucleosome) docking.

Conformation   Number of solutions   First cluster population (%)   RMSD (Å)(a)
70             248,678               82                              3.4
71             176,171               84                              5.4
72              43,138               85                              8.0
73              45,952               83                              7.5
74              22,960               37                              7.6
75             112,010               71                              5.5
76              90,214               59                             19.5

(a) Backbone RMSD (Å) of gH5 between the representative of the first cluster and the structure obtained by Pachov et al.33

The majority of the high scoring complexes were obtained with nucleosome conformations 70 (corresponding to the crystal structure) and 71. In both cases, clustering the docking solutions obtained with these nucleosome conformations resulted in most populated clusters containing approximately 80% of the sampled complexes. The lower number of docked complexes for the other conformations of the nucleosome suggests that they formed weaker interactions. The representative of the first cluster obtained with conformation 70 is in reasonable agreement with the reference structure published by Pachov et al.,33 showing a backbone RMSD of 3.4 Å. Exact agreement cannot be expected owing to differences in the clustering procedures and the number of trajectories sampled. The other conformations give representative structures somewhat shifted from the reference position, with conformation 76 giving a first docked cluster far from the reference binding site. Pachov et al. found a similar distribution of RMSD values over the conformations of this nucleosome mode, with the binding location of the linker histone for conformation 76 differing markedly from that for all the other conformations of this mode (see Fig. 2a in Ref. 33).

Discussion

SDA 7 has been completely rewritten in order to combine the existing SDA and SDAMM codes into a single framework. A new modular and object‐oriented approach has been used to allow easier integration of new methods. The new code uses dynamic memory allocation and improved parallelization, allowing increasingly large systems to be investigated, up to the limits imposed by the operating environment. The results of the application cases presented are consistent with previously published work, both computational and experimental, and show no variation with respect to the number of processor cores used for the calculations. The bimolecular association rate constant for the diffusional association of barnase and barstar was calculated to be 2.7–2.9 × 10^8 M−1s−1, in close agreement with the experimentally derived value obtained at pH 8 and 50 mM NaCl.31 This value also compares favorably with that reported by Gabdoulline and Wade in a study that used a previous version of SDA to model identical experimental conditions.25 The differences are likely due to sampling and to the use of different force fields for the initial atomic partial charge and radius assignment, as the underlying method remains the same. Brownian dynamics docking using SDA 7 was able to predict the native structure of the barnase‐barstar complex to within 4.8–4.9 Å of the known crystal structure, within the second most populated cluster (Table 1). This is consistent with the results of Motiejunas et al.,8 who previously used SDA to dock barstar to barnase and also found that the native structure agreed more closely with their second most populated cluster. It should be noted that SDA 7 is designed to predict diffusional encounter complexes rather than fully docked complexes. The exclusion grids used in SDA 7 to prevent two molecules from overlapping often result in predicted complexes in which the two interacting solutes have slightly increased separations.
To assess the docked solutions obtained with SDA, short low‐temperature MD simulations were performed to allow the docked complexes to relax to more stable positions. Following these simulations, the RMSD of barstar's interfacial heavy atoms, relative to the native crystallographic structure, dropped from 3.8–4.3 Å in the initial conformations to 1.9 Å in the lowest case, and to 2.6 Å in the lowest energy case, as estimated from single‐point GB calculations. The bimolecular BD simulations described here are trivially parallelizable, as the large number of trajectories performed can easily be split between threads. This is reflected in the scaling shown in the benchmark simulations. There are, however, some factors that limit scaling. As the trajectories are of indeterminate length, a long lasting trajectory can leave a single processor core running while the others remain idle, having completed all other required trajectories. Also, in the case of docking simulations, a global list of the highest scoring docked complexes sampled across threads is kept; synchronisation of this list requires occasional waiting times for some threads. The effects of this are seen in the barnase‐barstar simulations reported here, where the 48 core docking benchmark shows only a 35‐fold reduction in execution time, compared to a single core simulation, while the association benchmark shows a 40‐fold reduction. Moreover, the new parallelization of SDA 7 does not adversely affect single core performance: both association and docking simulations complete faster than equivalent simulations with SDA 6. The parallelization of many‐molecule simulations is more complex, with the force, trajectory propagation, and energy calculation routines each requiring that the previous routine has completed on all threads before execution.
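The trade-off between trivially parallel trajectories and a shared, synchronised top-scoring list can be illustrated with a toy trajectory farm: worker threads draw trajectory seeds from a queue and merge results into one lock-protected list. The lock stands in for SDA 7's list synchronisation, and the "trajectory" is a random-number stand-in, not a BD propagator:

```python
import heapq
import random
import threading
from queue import Queue

TOP_N = 10
best = []                      # min-heap of (score, seed): global top list
best_lock = threading.Lock()

def run_trajectory(seed):
    # stand-in for one BD docking trajectory; returns a score (assumed)
    return random.Random(seed).random()

def worker(tasks):
    while True:
        seed = tasks.get()
        if seed is None:       # sentinel: no more trajectories
            break
        score = run_trajectory(seed)
        # merging into the shared top list briefly serializes the threads,
        # one reason docking scales less well than the rate calculations
        with best_lock:
            if len(best) < TOP_N:
                heapq.heappush(best, (score, seed))
            elif score > best[0][0]:
                heapq.heapreplace(best, (score, seed))
        tasks.task_done()

tasks = Queue()
threads = [threading.Thread(target=worker, args=(tasks,)) for _ in range(4)]
for t in threads:
    t.start()
for seed in range(20000):      # 20,000 independent trajectories, as above
    tasks.put(seed)
tasks.join()
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()
print(sorted(best, reverse=True)[:3])
```

Because each trajectory's score depends only on its seed, the final top list is independent of the thread scheduling; only the wall time changes with the core count, mirroring the benchmark behaviour reported above.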
Each of the 256 HEWL molecule benchmarks reported here shows reasonable parallel scaling up to approximately 24 cores (Table 2, Fig. 5), with the three benchmarks showing 18‐ to 19‐fold reductions in execution time. In comparison, the 48 core benchmark showed only a 20‐ to 24‐fold reduction. The choice of state‐change algorithm had no significant effect on scaling, with the scaling penalty appearing to be slightly smaller for the Metropolis algorithm than for the minimum energy algorithm. It was expected that the scaling of many‐molecule simulations would depend on both the number of proteins within the simulation box and the density of the box. To test this, we ran simulations of 256, 2500, and 5000 HEWL molecules at pH 6 and a concentration of 15.0 mg/ml on 1, 24, and 48 cores; the simulations were otherwise performed as described previously. For 256 HEWL molecules, we observed 12‐fold and 10‐fold reductions in execution time for the 24 and 48 core runs, respectively, relative to the single core run. Compared to Table 2, this shows that decreasing the protein density, which reduces the number of force calculations, the most computationally expensive step, lowered the scaling performance. However, for 2500 HEWL molecules at 15.0 mg/ml, we observed 17‐fold and 34‐fold reductions, and for 5000 HEWL molecules, 21‐fold and 38‐fold reductions. Thus, increasing the total number of molecules improved the scaling performance. During the longer 500 ns simulations, in which each of the 256 HEWL molecules was able to switch between any of the three charge states used in this work, the pH 9 charge state was shown to be overwhelmingly more favorable. This is not surprising: all HEWL molecules repel each other owing to their overall positive charge, and the pH 9 state has the lowest overall charge (+7 e, compared to +8 e and +13 e for pH 6 and pH 3, respectively).
While this result does not represent a physically realistic experiment, as the different charge states relate to extremes in experimental conditions, we show it as a demonstration of how the ability to model “flexible solutes” can be used. More realistic results could be obtained by allowing transitions between a set of protonation states accessible at a given pH, or between different conformations of a flexible solute, as shown in the docking simulations of the linker histone H5 to the nucleosome. In these simulations, we were able to reproduce previously reported predictions33 in a more efficient manner. By allowing the nucleosome to sample a range of conformations during a single simulation, the required simulation time can be reduced, as the simulation is biased towards the most important complex‐forming configurations, and we can obtain the relative sampling of conformations within the lowest energy docked complexes (Table 3).

Concluding Remarks

SDA 7 is a major update of the SDA software. It has new functionalities, including the possibility to account for different conformations of the solutes and parallelization on multi‐core processors for bimolecular simulations. All development branches of SDA have been consolidated into a single, common framework using an object‐oriented, easily extendable approach. The results obtained in the validation cases are consistent with the results of previously published studies. The SDA 7 code is available for download at http://mcm.h‐its.org/sda7, and is free of charge for noncommercial use. The website includes documentation to guide new users, as well as descriptions of example simulations that are included within the distribution. An SDA 7 webserver (webSDA) is also available at http://mcm.h‐its.org/webSDA.