
Predicting Lattice Vibrational Frequencies Using Deep Graph Neural Networks.

Nghia Nguyen1, Steph-Yves V Louis1, Lai Wei1, Kamal Choudhary2,3, Ming Hu4, Jianjun Hu1.   

Abstract

Lattice vibrational frequencies are related to many important materials properties such as thermal and electrical conductivity as well as superconductivity. However, computing vibrational frequencies with density functional theory methods is too demanding for the large numbers of samples required in materials screening. Here we propose a deep graph neural network based algorithm for predicting crystal vibrational frequencies from crystal structures. Our algorithm addresses the variable dimension of the vibrational frequency spectrum using a zero-padding scheme. Benchmark studies on two data sets with 15,000 mixed-structure and 35,552 rhombohedral samples show that the aggregated R2 scores of the prediction reach 0.554 and 0.724, respectively. We also evaluate the structural transferability by predicting the vibrational frequencies of 239 individual cubic target structures. The R2 scores for more than 40% of the targets are greater than 0.8 and reach as high as 0.98 for the model trained with mixed samples, while the average mean absolute error is 43.69 THz, showing low transferability across structure types. Our work demonstrates the capability of deep graph neural networks to learn to predict lattice vibrational frequencies when a sufficient number of training samples is available.
© 2022 The Authors. Published by American Chemical Society.

Year:  2022        PMID: 35936410      PMCID: PMC9352222          DOI: 10.1021/acsomega.2c02765

Source DB:  PubMed          Journal:  ACS Omega        ISSN: 2470-1343


Introduction

Almost all solids, such as crystals, amorphous solids, glasses, and glass-like materials, have an ordered, disordered, or hybrid ordered/disordered arrangement of atoms. Due to thermal fluctuations, all atoms in a solid phase vibrate about their equilibrium positions. The existence of a periodic crystal lattice in solid materials provides a medium for characteristic vibrations. The quantized, collective vibrational modes in solid materials are called phonons. The study of phonons plays an important part in solid-state physics, electronics, and photoelectronics, as well as other emerging applications in modern science and technology, as phonons play an essential role in determining many physical and chemical properties of solids, including the thermal and electrical conductivities of most materials. Lattice vibrations have long been used to explain sound propagation in solids, thermal transport, elastic and optical properties of materials, and even photoassisted processes such as photovoltaics. For instance, numerous studies explore the determinant role of electron–phonon coupling in heat conduction,[1−7] superconductivity,[8−12] and photoelectronics.[13−16] Acoustic-branch vibration mode softening, rather than Fermi surface nesting, has been identified as the mechanism of the superconducting transition in platinum diselenide, a type-II Dirac semimetal.[17] A previous study also illustrates the pivotal role played by electron–phonon coupling in photocurrent generation in photovoltaics.[18] Phonon-assisted up-conversion photoluminescence has been experimentally observed for CdSe/CdS core/shell quantum dots,[19] which could be exploited as efficient, stable, and cost-effective emitters in various applications.
Therefore, predicting the basic behaviors of lattice vibrations, i.e., the lattice vibrational frequencies, is beneficial toward the future design of novel materials with controlled or tailored elastic, thermal, electronic, and photoelectronic properties. Despite the great importance of predicting the vibrational properties of crystalline materials, high-fidelity computation of lattice vibrational frequencies for a considerably large data set is not an easy task.[20] The traditional method to obtain the vibrational frequencies of a lattice is to diagonalize the dynamical matrix of a crystal structure to get its eigenvalues (frequencies). Herewith, we restrict all of our discussions to the Γ-point frequencies only. The difficulty lies in evaluating the large number of interatomic force constants (IFCs) of a lattice in a highly efficient and accurate fashion, which is required for obtaining the dynamical matrix associated with the vibrational frequencies. Depending on the symmetry, composition, and structural complexity (such as the number of species and their ratio) of the crystal, IFC calculations can be time and resource consuming. The IFC calculation can be accomplished either by a quantum-mechanical approach, which can also yield a phonon's dispersion relation and even anharmonicity, or by a semiclassical treatment of lattice vibrations, which solves Newton's equations of motion with empirical interatomic potentials. However, the quantum-mechanical approach, despite its high accuracy, cannot be used to evaluate or predict the lattice vibrational frequencies of a large number of crystals with diverse compositions and lattice complexities, due to its prohibitive computational cost. On the other hand, the empirical potential method, although very fast compared to the quantum-mechanical approach, fails to give satisfactory results most of the time.
For example, if the interatomic interactions are not accurately calculated, the dynamical matrix could be ill defined, and as a result there could be spurious negative values among the obtained frequencies. To this end, developing algorithms that can accurately and quickly screen and evaluate a large number of crystals would be very valuable for high-throughput computing and novel materials design. Big data and deep learning approaches have already brought a transformative revolution to computer vision, autonomous cars, and speech recognition in recent years. Machine learning and deep learning algorithms have been increasingly applied in materials property prediction[21−26] and materials discovery.[27,28] It is well acknowledged that machine learning has the potential to accelerate novel materials discovery by predicting materials properties at very low computational cost while maintaining high accuracy, sometimes even comparable to the first-principles level. Although training a good machine learning model usually requires a decent amount of high-quality data, typically obtained through high-precision ab initio simulations, the trained model is very efficient and attractive for screening and predicting large numbers of unexplored structures, as it is orders of magnitude faster than traditional one-by-one computation. Among all of the methods for materials property prediction, structure-based graph neural networks have demonstrated[23] the best overall performance, with a large advantage over composition-based methods and heuristic structure feature-based approaches. In the field of lattice vibration (phonons), their potential has yet to be realized, due to the inherent difference between materials data and image/audio data and the lack of sufficient materials data.
Since the vibrational frequencies of a crystalline material strongly depend on its atomic structure, and the structural patterns strongly relevant to this property are not well understood, it is highly expected that the strong representation learning capability of deep graph neural networks can be used to train deep learning models for vibrational-frequency prediction. Benefiting from 15,000 mixed-type structures and 35,552 rhombohedral structures with Γ-frequencies that we have recently calculated, this work presents a newly developed graph neural network and deploys the trained model to predict the lattice vibrational frequencies of crystal materials. Benchmark studies on these two data sets showed that our deeperGATGNN model can achieve very good performance, with an R2 score of 0.724 when the model is trained and tested with the rhombohedral crystal structures. It also shows good performance when applied to predict cubic crystal structures. The model performance on the smaller data set with mixed crystal structures is lower, with an R2 score of 0.556. To the best of our knowledge, this is the first work that uses a deep (graph) neural network to study phonon frequencies.

Methods

Data

To evaluate the performance of our graph neural network model for vibrational-frequency prediction, we prepared two data sets. The first is the Rhombohedron data set, composed of 35,552 rhombohedral crystal structures obtained by density functional theory (DFT) relaxation of cubic structures of three prototypes (ABC6, ABC6D6, and ABCD6) generated by our cubicGAN algorithm, a deep learning based cubic structure generator.[28] The second is the Mix data set, consisting of 15,000 crystal structures from mixed crystal systems. We split the Rhombohedron data set into a training set with 28,441 samples and a test set with 7,111 samples, and the Mix data set into a training set with 12,000 samples and a testing set with 3,000 samples. The calculation processes for both data sets are described below.

Data Calculation and Collection

All of the first-principles calculations are carried out using the projector augmented wave (PAW) method as implemented in the Vienna ab initio simulation package (VASP) based on DFT.[29,30] Please note that commercial software is identified to specify procedures. Such identification does not imply recommendation by the National Institute of Standards and Technology (NIST). The initial crystal structures were taken from the Materials Project database. We then optimized each crystal structure with both the atomic positions and lattice constants fully allowed to relax, in spin-unrestricted mode and without any symmetry constraints. The maximal Hellmann–Feynman force component was smaller than 10⁻³ eV/Å, and the total energy convergence tolerance was set to 10⁻⁶ eV. The Opt-B88vdW functional was used to account for long-range van der Waals interactions in the exchange–correlation treatment.[31] All Γ-point frequencies were calculated using VASP. The Γ-point frequencies were extracted from elastic constant calculations using VASP with parameters IBRION = 6 and NFREE = 4, where the Hessian matrix (the matrix of second derivatives of the energy with respect to the atomic positions) and the Γ-point vibrational frequencies of a system are determined by the finite displacement difference method. The k-point meshes for these elastic constant calculations were generally 4 × 4 × 4 for most of the systems, while for some large-cell systems we reduced the k-points to 2 × 2 × 2. The focus of this work is on training and predicting vibrational frequencies; the elastic constant data are used for training other models in a separate work.

Constructing Training and Testing Data Sets

For each crystal structure, we parse its OUTCAR file for vibrational frequencies. Some of the vibrational frequencies are imaginary; these are represented as negative values. Additionally, since each crystal structure has a variable number of atoms, the output has a variable number of vibrational frequencies. Therefore, we first identify the crystal with the largest number of atoms to determine the maximum number of frequencies to predict. For instance, since the crystal with the largest number of atoms in our data set has 14 atoms, it has 42 vibrational frequencies. The output vector dimension is therefore set to 42 for all crystal structures in the data set, formatted as [first frequency, second frequency, third frequency, ..., 42nd frequency]. If the number of vibrational frequencies is less than 42, the remaining values are padded with zeros.
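The padding step above can be sketched in a few lines of Python; the frequency values below are illustrative, not taken from the paper's data set.

```python
def pad_frequencies(freqs, max_atoms=14):
    """Pad a variable-length list of Gamma-point frequencies (THz) to a
    fixed-dimension target vector of length 3 * max_atoms (42 here).
    Imaginary modes are already encoded as negative values upstream."""
    target_dim = 3 * max_atoms  # three vibrational modes per atom
    if len(freqs) > target_dim:
        raise ValueError("structure has more atoms than the padding allows")
    return list(freqs) + [0.0] * (target_dim - len(freqs))

# A hypothetical 2-atom crystal has 6 frequencies; the rest of the
# 42-dimensional target vector is filled with zeros.
vec = pad_frequencies([-1.2, 3.4, 5.0, 7.1, 8.8, 12.3])
```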

Definition of the Vibrational-Frequency Prediction Problem: Task Modeling

We approach the vibrational-frequency prediction task as a variable-dimension regression problem (Figure 1). For an input POSCAR file, we need to predict its vibrational frequencies as a vector of variable dimension. While we recognize that the calculation of many materials properties would require the full phonon dispersion and even the corresponding phonon modes, in this study we focus on vibrational-frequency prediction.
Figure 1

Representative atomic structure of AlB2 (a) and corresponding phonon dispersions (b). The number of phonon frequencies is triple the number of atoms within the unit cell.

Scalable Global Attention Graph Neural Network

To learn the sophisticated structure–property relationship between the crystals and their vibrational frequencies, we use our recently developed scalable deeper graph neural network with a global attention mechanism.[32] Our deeperGATGNN model (Figure 2) is composed of a set of augmented graph attention layers with ResNet-style skip connections and differentiable group normalization to achieve complex deep feature extraction. After several such feature transformation steps, a global attention layer is used to aggregate the features over all nodes, and a global pooling operator further processes the information to generate a latent feature representation of the crystal. This feature is then mapped to the vibrational frequencies by a few fully connected layers. To train the model, we first convert all crystal structures in the data set into graphs using a radius threshold of 8 Å and a maximum of 12 neighbor atoms. The graph representation of our data set automatically provides translation- and rotation-invariant feature extraction.
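The crystal-to-graph conversion can be sketched as below. This is a simplified, non-periodic version using hypothetical coordinates; a real implementation, such as the paper's pipeline, must also include periodic images of the atoms.

```python
import numpy as np

def build_graph(positions, radius=8.0, max_neighbors=12):
    """Connect each atom to at most `max_neighbors` nearest atoms that
    lie within `radius` Angstroms, returning a directed edge list."""
    positions = np.asarray(positions, dtype=float)
    edges = []
    for i in range(len(positions)):
        d = np.linalg.norm(positions - positions[i], axis=1)
        d[i] = np.inf  # exclude self-loops
        for j in np.argsort(d)[:max_neighbors]:
            if d[j] <= radius:
                edges.append((i, int(j)))
    return edges

# Toy 3-atom "structure" with illustrative coordinates (Angstroms)
edges = build_graph([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0]])
```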
Figure 2

Architecture of the deeperGATGNN neural network. It is composed of several graph convolution layers with differentiable normalization and skip connections plus a global attention layer and final fully connected layers. Reproduced with permission from ref (32). Copyright 2022 Elsevier (in Patterns).

One of the major advantages of our deeperGATGNN model for materials property prediction lies in its high scalability and state-of-the-art prediction performance, as benchmarked over six data sets.[32] The scalability allows us to train a very deep network with 10 or more graph attention layers to achieve complex feature extraction, without the performance degradation that many other graph neural networks suffer from due to the oversmoothing issue. Another advantage is that the deeperGATGNN model has demonstrated good performance without the need for computationally expensive hyperparameter tuning. The only major parameter is the minimum number of graph attention layers.

Differentiable Group Normalization

One of the key issues of standard graph neural networks is the oversmoothing problem, which leads to the homogenization of the node representations as an increasing number of graph convolution layers are stacked. To address this issue and build a deeper graph neural network, we used a differentiable group normalizer[33] to replace the standard batch normalization. This operator first softly clusters the nodes on the basis of their representations and then performs normalization separately within each cluster.
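A minimal numpy sketch of the idea: nodes are softly assigned to groups from their representations, and each group is normalized separately before the results are summed. The shapes and the random assignment weights `W` are illustrative assumptions, not the operator's actual implementation from ref 33.

```python
import numpy as np

def differentiable_group_norm(H, W, eps=1e-5):
    """H: node features (n_nodes, n_feat); W: learned assignment weights
    (n_feat, n_groups). Softly cluster nodes, normalize each cluster, sum."""
    S = np.exp(H @ W)
    S /= S.sum(axis=1, keepdims=True)        # soft group memberships
    out = np.zeros_like(H)
    for g in range(W.shape[1]):
        Hg = S[:, [g]] * H                   # features weighted by membership
        mu, var = Hg.mean(axis=0), Hg.var(axis=0)
        out += (Hg - mu) / np.sqrt(var + eps)  # per-group normalization
    return out

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))                  # 6 nodes, 4 features
out = differentiable_group_norm(H, rng.normal(size=(4, 2)))  # 2 groups
```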

Residual Skip Connection

We also added a set of residual skip connections to our GATGNN models, a well-known strategy for enabling the training of deeper neural networks, first introduced in the ResNet framework[34] and later used in graph neural networks as well.[35] We added one skip connection to each of our graph convolution layers.
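The per-layer skip connection amounts to adding the layer's input to its output, giving gradients an identity path around the transformation. A one-line sketch, with a stand-in `conv` in place of an attention layer:

```python
import numpy as np

def residual_layer(h, conv):
    """Apply a graph convolution `conv` with a ResNet-style skip
    connection: output = conv(h) + h (identity path)."""
    return conv(h) + h

# Stand-in convolution that just scales features, for illustration only.
h = np.ones((3, 4))
out = residual_layer(h, lambda x: 0.5 * x)   # each entry becomes 1.5
```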

Evaluation Measures

Our study uses a graph neural network to create a model that predicts vibrational frequencies. To evaluate its performance, we use the mean absolute error (MAE) and the coefficient of determination (R2):

MAE = (1/n) Σᵢ |yᵢ − ŷᵢ|

R2 = 1 − Σᵢ (yᵢ − ŷᵢ)² / Σᵢ (yᵢ − ȳ)²

where n is the number of data points and yᵢ and ŷᵢ are respectively the actual and predicted values for the ith data point in the data set. The variable ȳ is the mean of all of the yᵢ values. In Figures 3, 5, and 8, the R2 value represents the proportion of the variation of the predicted frequencies that is predictable from the actual frequencies, in accordance with their linear regression lines.
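Both measures are straightforward to compute; a quick numpy sketch with illustrative toy values:

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error: (1/n) * sum_i |y_i - y_hat_i|."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat)))

def r2(y, y_hat):
    """Coefficient of determination:
    1 - sum_i (y_i - y_hat_i)^2 / sum_i (y_i - y_bar)^2."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

y, y_hat = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]  # toy values
```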
Figure 3

Performance of deeperGATGNN for vibrational-frequencies prediction over the Rhombohedron data set. The scatter plot shows the predicted versus ground truth vibrational frequency for all test materials.

Figure 5

Performance of deeperGATGNN for vibrational-frequencies prediction over the Mix data set. The scatter plot shows the predicted versus ground truth vibrational frequency for all test materials.

Figure 8

Prediction performance of vibrational frequencies by deeperGATGNN. Group one: (a–c, h) structures of four materials Fe2H6, B6H18O18, B48O6, and Be2BH3O5 along with their predicted vibrational frequencies (d–f, k) and the regression R2 scores of 0.98, 0.968, 0.954, and 0.95, respectively. The vibrational frequencies of this group are spread over the whole range. Group two: (g, i) structures of two materials C44F28 and C120F36 and their predicted frequencies (j, l) with R2 scores of 0.953 and 0.947, respectively. Their vibrational frequencies are clustered at the two ends of the frequency range.


Experimental Results

Overall Performance of Vibrational-Frequency Prediction

We first trained a deeperGATGNN model for vibrational-frequency prediction over the more homogeneous Rhombohedron data set. We randomly picked 28,441 samples for training and the remaining 7,111 samples for testing. The following hyperparameters are used for our graph neural network model training: learning rate = 0.004, graph convolution layers = 10, and batch size = 128. No dropout is used, as it always deteriorates the prediction performance. We calculated the MAE for both the training samples and the testing samples. The average MAE for the training samples is 4.28943 THz, while the average MAE for the testing samples is 4.28879 THz. To further check the model performance, we show the predicted vibrational frequencies versus the ground truth values for all of the test samples in the same scatter plot, as shown in Figure 3. First, we find that most of the points are located around the diagonal, indicating a high prediction performance, with an R2 score reaching 0.724. There are a few outliers gathered around the low-frequency ground truth area. The majority of prediction errors occur for points on the bottom line, where a certain proportion of ground truth vibrational frequencies are predicted as zero; this may be due to the systematic imbalance of the data set, with its majority of positive vibrational frequencies, which our current model cannot handle well. Overall, however, the majority of vibrational frequencies are predicted with high precision, as shown in Figure 3. To check the generalization performance of our deeperGATGNN model for vibrational-frequency prediction, we plot histograms of the prediction MAEs over both the training set and the test set of our Rhombohedron data set (Figure 4). Most frequency MAEs are around 2.5 THz, while there is another small peak around 9 THz.
It is interesting to find that the MAE histogram over the test set has a very similar distribution, indicating the good generalization performance of our model for vibrational-frequency prediction.
Figure 4

Histograms of MAE prediction errors over the training samples and the testing samples for the Rhombhedron data set.

To further verify the performance of our deeperGATGNN model, we trained another model using the Mix data set, which contains 15,000 crystal structures that are more complex and diverse than those in the Rhombohedron data set. We used a training set with 12,000 samples and a testing set with 3,000 samples and then calculated the MAEs and the R2 score. As shown in Figure 5, the scatter plot of the predicted vibrational frequencies versus the ground truth values for all test materials has a much wider distribution around the regression line compared to the result in Figure 3. The R2 score here is 0.556, which is significantly lower than the 0.724 obtained for the Rhombohedron data set, indicating the much greater challenge in predicting the vibrational frequencies of mixed structures. Another possible reason is that the Mix data set has a much smaller number of samples: 15,000 versus 35,552. However, our deeperGATGNN model still achieves a reasonably good overall performance, as shown by the clear trend of the regression line. To check the generalization performance of our deeperGATGNN model on the Mix data set, we show the MAE distributions for both the training set and the testing set in Figure 6. We find that the MAE histograms of the training set and the testing set from the Mix data set are almost the same, indicating its good generalization performance. An interesting observation is that the MAE distribution for the Mix data set has only one peak, while the distribution for the Rhombohedron data set has two peaks, as shown in Figure 4.
Figure 6

Histograms of MAE prediction errors over the training samples and the testing samples for the Mix data set.


Training Process and Effect of Training Set Size

To understand the training process of the deeperGATGNN model for vibrational frequencies, we plotted the training and validation errors during training, as shown in Figure 7a. The training error keeps going down until it becomes stagnant, while the larger validation error also goes down and becomes stable after about 300 epochs, indicating a good fit of the model (no overfitting). We further checked how the training set size affects the model performance by training different models using different numbers of training samples from the Rhombohedron data set. The results are shown in Figure 7b. We found that the prediction MAEs keep going down as more training samples are used, but after the training sample number reaches 20,000, there is no significant further performance improvement.
Figure 7

Characteristics of the deeperGATGNN model training process. (a) MAE changes during training. (b) Effect of training set size on performance.


Hyperparameter Study

It is well-known that the hyperparameters of graph neural networks can strongly affect their final performance. To gauge their impact and obtain optimal settings, we conducted a series of hyperparameter tuning experiments. The main hyperparameters of our model include the number of graph convolution layers, the learning rate, the batch size, and the dropout rate (for controlling overfitting). The results are shown in Table 1. First, we found that adding dropout to our model always leads to worse performance, in contrast to deep neural network models in computer vision, so no dropout is used in our experiments. Second, we find that for a given learning rate in the range 0.001 to 0.005, the larger batch size (256) usually yields lower performance than batch size 128. The optimal performance is obtained with a learning rate of 0.004, 10 graph convolution (AGAT) layers, and a batch size of 128, which we use for all experiments on both data sets.
Table 1

Prediction Performance (MAEs (THz)) of Different Parameter Settings

              lr 0.001        lr 0.002        lr 0.003        lr 0.004        lr 0.005
AGAT layers   bs 128  bs 256  bs 128  bs 256  bs 128  bs 256  bs 128  bs 256  bs 128  bs 256
5             1.948   2.331   1.642   1.893   1.676   1.740   1.538   1.527   1.389   1.459
10            2.198   2.504   1.758   1.927   1.519   1.945   1.470   1.540   1.524   1.761
15            1.999   2.392   1.597   1.969   1.593   1.689   1.534   1.507   1.523   1.539
20            2.811   2.930   1.581   2.403   1.459   1.767   1.477   1.596   1.539   1.513

Case Analysis of Prediction Quality of Different Target Materials

To further understand how the deeperGATGNN model performs for vibrational-frequency prediction, we used our model trained with the Mix data set to predict 100 test samples and show the results for six crystal structures with high prediction-accuracy R2 scores: Fe2H6, B6H18O18, B48O6, C44F28, Be2BH3O5, and C120F36. The six case-study target materials include binary, ternary, and quaternary materials with diverse structures, and the numbers of atoms within their unit cells range from 8 to 156. In Figure 8, we present each of the target structures along with scatter plots showing the predicted vibrational frequencies versus the ground truths. We can divide them into two groups for discussion on the basis of the distribution of their vibrational frequencies. In group one, the frequencies are distributed roughly evenly over the whole frequency range, as shown in Figure 8d–f,k. This group includes Fe2H6, B6H18O18, B48O6, and Be2BH3O5, for which our deeperGATGNN model achieves very good performance, with R2 scores of 0.98, 0.968, 0.954, and 0.95, respectively. In group two, the vibrational frequencies are clustered at the two ends of the frequency range, as shown in Figure 8j,l. It includes two materials: C44F28 and C120F36. It is usually difficult to achieve good regression results for these types of distributions. However, our prediction model obtains high R2 scores of 0.953 and 0.947 for C44F28 and C120F36, respectively. Overall, the R2 scores are above 0.9 for all six target structures: the best score is 0.98 for Fe2H6, and the lowest is 0.947 for C120F36. However, despite the high R2 scores, we find that the predicted absolute values differ considerably from the true DFT values, with an average MAE of 43.7 THz. We notice that the predicted vibrational frequencies show very high linear correlations with the true frequencies, with slopes and intercepts that, however, differ for each material.
To exploit this linear relationship for improving vibrational-frequency prediction, we train two composition-based neural network models to predict the slope and intercept of the linear relationship for each material, so that a linear model can map the raw output of our graph neural network to the final predictions. We use the Roost algorithm,[36] a composition-based graph neural network for property prediction, trained on the fitted per-material linear models, to predict the slope and intercept. We then use them to map the deeperGATGNN-predicted vibrational frequencies to calibrated values, which reduces the average MAE to 33 THz. To check the individual-structure-level R2 performance of our model for vibrational-frequency prediction, we plot a histogram of all R2 scores for the 239 cubic test structures whose vibrational frequencies are predicted by the model trained with the Mix data set (Figure 9). We find that the overall performance is strong, with more than 55% of the structures having R2 scores greater than 0.65 and more than 40% having R2 scores above 0.8. However, the average MAE for these 239 test structures is 43.7 THz, which is relatively high.
This demonstrates that our deeperGATGNN model has a certain but limited transferability for vibrational-frequency prediction across structure types: the lack of sufficient cubic samples in the Mix data set impedes the prediction performance over these cubic structures.
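The calibration step described above can be sketched as follows. In the paper, the slope and intercept are predicted from composition by Roost models; here they are simply fitted per material with `np.polyfit`, and all numbers are hypothetical.

```python
import numpy as np

def fit_calibration(raw, dft):
    """Fit a per-material linear map: dft ~ slope * raw + intercept."""
    slope, intercept = np.polyfit(raw, dft, deg=1)
    return slope, intercept

def calibrate(raw, slope, intercept):
    """Map raw GNN frequency outputs to calibrated predictions."""
    return slope * np.asarray(raw, float) + intercept

raw = np.array([10.0, 20.0, 30.0])   # hypothetical raw GNN outputs (THz)
dft = 1.5 * raw - 4.0                # hypothetical DFT ground truth (THz)
slope, intercept = fit_calibration(raw, dft)
```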
Figure 9

R2 performance of deeperGATGNN for 239 target hold-out test samples.

Before closing, it is worth pointing out the advantage of our trained models in predicting negative vibrational frequencies. Our training data include negative vibrational frequencies, so after training, our model automatically has the capability to predict negative vibrational frequencies for new structures. In materials science, it is well-known that negative vibrational frequencies usually mean that the corresponding structures are either not thermodynamically stable at all, i.e., likely to decompose into substances with lower energies, or unstable only in a certain temperature range, i.e., likely to undergo a phase transition into a different space group. In either case, prediction of negative vibrational frequencies is valuable for large-scale materials screening; for example, one can use the trained model to filter out materials that are not stable. We have used our model to predict the vibrational frequencies of new structures, and we do find that a large portion of the structures have negative vibrational frequencies. We further checked the formation energy and energy above the hull of those structures with negative vibrational frequencies and found that a significant number of them are not thermodynamically stable, in terms of positive formation energies and high (positive) energy-above-hull values. It is also worth pointing out that Γ-point-frequency prediction using machine learning is only the very first step for the thermal science community; understanding more phonon-related material properties would require knowledge of the full phonon spectrum and the corresponding phonon modes. M.H.'s group is currently training large-scale neural network models to predict full phonon dispersions and related phonon modes, based on more time- and resource-consuming DFT calculations.
Those results will be reported in separate subsequent publications in the near future.
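The screening use case mentioned above reduces to a simple filter over the predicted frequency vectors; the materials and numbers below are hypothetical.

```python
def is_dynamically_stable(freqs, tol=-0.1):
    """Flag a structure as stable if no predicted frequency is
    meaningfully negative; `tol` (an assumed choice) absorbs small
    numerical noise near zero."""
    return all(f >= tol for f in freqs)

candidates = {
    "mat_A": [2.1, 5.3, 8.0],        # all real modes -> keep
    "mat_B": [-3.2, 4.1, 7.7],       # imaginary mode -> likely unstable
}
stable = [name for name, f in candidates.items() if is_dynamically_stable(f)]
```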

Conclusion

We have proposed a deep global graph attention neural network algorithm for predicting the vibrational frequencies of a crystal material from its structure information. We formulate the task as a variable-dimension vector-target regression problem. Extensive experiments on two data sets with 35,552 and 15,000 samples show that our graph network model can handle crystals with varying numbers of atoms and can predict the vibrational frequencies with good performance for rhombohedral crystal materials, with an R2 score reaching 0.724. For the data set with mixed structures, vibrational-frequency prediction is much more challenging, with an R2 score around 0.556. Moreover, we find that our model has low structural transferability: when the model trained with mixed samples is used to predict the vibrational frequencies of cubic structures, the MAEs of the predicted frequencies are high despite the high correlations of the predictions with the ground truths. We find that increasing the number of training samples can significantly reduce the prediction error, as is widely recognized in other materials property prediction tasks. Further research, such as collecting more training data with diverse structures or improving the algorithm, is needed to build more accurate models and to improve the transferability of the trained models for phonon vibrational-frequency prediction.
References (13 in total)

1.  Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set.

Authors: 
Journal:  Phys Rev B Condens Matter       Date:  1996-10-15

2.  Distinct Signatures of Electron-Phonon Coupling Observed in the Lattice Thermal Conductivity of NbSe3 Nanowires.

Authors:  Lin Yang; Yi Tao; Jinyu Liu; Chenhan Liu; Qian Zhang; Manira Akter; Yang Zhao; Terry T Xu; Yaqiong Xu; Zhiqiang Mao; Yunfei Chen; Deyu Li
Journal:  Nano Lett       Date:  2018-12-14       Impact factor: 11.189

3.  External electric field driving the ultra-low thermal conductivity of silicene.

Authors:  Guangzhao Qin; Zhenzhen Qin; Sheng-Ying Yue; Qing-Bo Yan; Ming Hu
Journal:  Nanoscale       Date:  2017-06-01       Impact factor: 7.790

4.  Electron-phonon interaction and superconductivity in the high-pressure cI16 phase of lithium from first principles.

Authors:  Sheng-Ying Yue; Long Cheng; Bolin Liao; Ming Hu
Journal:  Phys Chem Chem Phys       Date:  2018-10-31       Impact factor: 3.676

5.  Strong electron-phonon interaction retarding phonon transport in superconducting hydrogen sulfide at high pressures.

Authors:  Jia-Yue Yang; Ming Hu
Journal:  Phys Chem Chem Phys       Date:  2018-09-13       Impact factor: 3.676

6.  Phonon-Assisted Ultrafast Charge Transfer at van der Waals Heterostructure Interface.

Authors:  Qijing Zheng; Wissam A Saidi; Yu Xie; Zhenggang Lan; Oleg V Prezhdo; Hrvoje Petek; Jin Zhao
Journal:  Nano Lett       Date:  2017-09-19       Impact factor: 11.189

7.  Scalable deeper graph neural networks for high-performance materials property prediction.

Authors:  Sadman Sadeed Omee; Steph-Yves Louis; Nihang Fu; Lai Wei; Sourin Dey; Rongzhi Dong; Qinyang Li; Jianjun Hu
Journal:  Patterns (N Y)       Date:  2022-04-27

8.  High-Throughput Discovery of Novel Cubic Crystal Materials Using Deep Generative Neural Networks.

Authors:  Yong Zhao; Mohammed Al-Fahdi; Ming Hu; Edirisuriya M D Siriwardane; Yuqi Song; Alireza Nasiri; Jianjun Hu
Journal:  Adv Sci (Weinh)       Date:  2021-08-05       Impact factor: 16.806

9.  Phonon-assisted up-conversion photoluminescence of quantum dots.

Authors:  Zikang Ye; Xing Lin; Na Wang; Jianhai Zhou; Meiyi Zhu; Haiyan Qin; Xiaogang Peng
Journal:  Nat Commun       Date:  2021-07-13       Impact factor: 14.919

