Yixuan Sun, Surya Mitra Ayalasomayajula, Abhas Deva, Guang Lin, R. Edwin García.
Abstract
The quantification of microstructural properties to optimize battery design and performance, to maintain product quality, or to track the degradation of lithium-ion batteries (LIBs) remains expensive and slow when performed through currently used characterization approaches. In this paper, a convolutional neural network (CNN)-based deep learning approach is reported to infer electrode microstructural properties from inexpensive, easy-to-measure cell voltage versus capacity data. The developed framework combines two CNN models to balance the bias and variance of the overall predictions. As an example application, the method was demonstrated against porous electrode theory-generated voltage versus capacity plots. For the graphite|LiMn₂O₄ chemistry, each voltage curve was parameterized as a function of the cathode microstructure tortuosity and area density, delivering CNN predictions of Bruggeman's exponent and shape factor with a 0.97 R² score within 2 s each, enabling the approach to distinguish between different types of particle morphologies, anisotropies, and particle alignments. The developed neural network model can readily accelerate the assessment of the processing-properties-performance and degradation characteristics of existing and emerging LIB chemistries.
Year: 2022 PMID: 35927411 PMCID: PMC9352700 DOI: 10.1038/s41598-022-16942-5
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1 Convolutional neural network architecture to infer microstructural battery parameters. The CNN comprises convolution blocks and fully connected layers, and takes two types of input at different stages. The model takes the color-encoded voltage versus capacity curves as the main input (each color corresponding to a current density), and the energy density, E, and power density, P, as the second input. Each convolution block has two convolutional layers followed by a pooling layer. A ReLU activation function is placed after each convolutional layer and hidden dense layer. For each data point, the image with the voltage curves is fed into the network. For each curve, E and P are taken into the following fully connected layers, along with the higher-level representation of the input image. The output of this network has two components, the Bruggeman exponent, α, and the area density shape factor, S. See text for details.
Network architecture description.
| Operation layer | Activation | Number of filters | Kernel size | Stride | Padding | Output size |
|---|---|---|---|---|---|---|
| Input voltage curves | – | – | – | – | – | – |
| Convolution layer | ReLU | 64 | | | Same | |
| Convolution layer | ReLU | 64 | | | Same | |
| Max pooling | – | – | | | Same | |
| Convolution layer | ReLU | 32 | | | Same | |
| Convolution layer | ReLU | 32 | | | Same | |
| Max pooling | – | – | | | Same | |
| Convolution layer | ReLU | 16 | | | Same | |
| Convolution layer | ReLU | 16 | | | Same | |
| Max pooling | – | – | | | Same | |
| Extra input (E, P) and flattening | ReLU | – | – | – | – | |
| Dense layer | ReLU | – | – | – | – | 512 |
| Dense layer | ReLU | – | – | – | – | 128 |
| Output (Bruggeman exponent) | – | – | – | – | – | 1 |
| Output (shape factor S) | – | – | – | – | – | 1 |
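The kernel sizes, strides, and per-layer output sizes were typeset as formulas in the original record and are left blank in the table. The shape bookkeeping can nonetheless be sketched. The following is a minimal illustration, assuming "same"-padded convolutions (spatial size preserved) and 2×2 max pooling with stride 2 (spatial size halved); the input image size is left as a parameter since it is not given here.

```python
# Hypothetical sketch of the output-size bookkeeping for the CNN table above.
# Assumptions (not stated in the record): "same"-padded convolutions preserve
# spatial size, and each pooling layer is a 2x2 max pool with stride 2.
def feature_sizes(height, width, n_blocks=3):
    """Track (height, width, channels) through the three convolution blocks."""
    channels = [64, 32, 16]           # filter counts per block, from the table
    sizes = [(height, width, None)]   # input voltage-curve image
    h, w = height, width
    for c in channels[:n_blocks]:
        # two same-padded convolutions: spatial size unchanged, channels -> c
        sizes.append((h, w, c))
        sizes.append((h, w, c))
        # 2x2 max pool, stride 2: spatial size halves (floor division)
        h, w = h // 2, w // 2
        sizes.append((h, w, c))
    return sizes
```

For a hypothetical 128×128 input, the final feature map would be (16, 16, 16); its flattened form, concatenated with E and P, would feed the 512-unit dense layer.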
Figure 2 Aggregated true versus predicted scatter plots from tenfold cross-validation for (a) Bruggeman's exponent α from the α-model, (b) α from the S-model, (c) shape factor S from the α-model, and (d) S from the S-model. Overall, the trained models accurately predict both α and S. Specifically, the S-model performed better at predicting S, with 1.70% less error throughout the range of S values, while the α-model was better at predicting α by over 5% in comparison to the S-model. A combination of the two models' predictions was adopted for the final prediction.
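The R² metric quoted for these scatter plots, and one plausible way of combining the two models' outputs, can be illustrated as follows. This is a hypothetical sketch, not the authors' code: `r2_score` implements the standard coefficient of determination, and `combine_predictions` assumes the final estimate takes each quantity from the model that predicts it better.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def combine_predictions(alpha_model_out, s_model_out):
    """Hypothetical combination rule: each model outputs (alpha, S); keep
    alpha from the alpha-model and S from the S-model."""
    return alpha_model_out[0], s_model_out[1]
```

A perfect prediction gives `r2_score` of exactly 1.0; values near the paper's reported 0.97 indicate small residual variance relative to the spread of the targets.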
Figure 3 Residual analysis of the proposed models, showing the normalized residual plots, their densities, and the corresponding Q–Q plots. (a) α residuals from the α-model, as given by Eq. (3). Results show that the α-model underpredicts α by 3% over part of the α range and overpredicts by 5.0% otherwise. The mean of the residuals is greater than zero, i.e., overall the model underpredicted α. The corresponding Q–Q plot suggests a near-symmetric Gaussian distribution of residuals with a slight right skew. (b) The α residuals from the S-model are more centered around zero, with larger values than for the α-model. The corresponding Q–Q plot indicates a near-symmetric Gaussian distribution of residuals with heavy tails. (c) The S residuals from the α-model show an overall underprediction. The associated Q–Q plot shows a Gaussian distribution of residuals with a slight right skew. (d) The S residuals from the S-model are centered around zero. The corresponding Q–Q plot suggests a near-symmetric Gaussian distribution of residuals with heavy tails.
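The diagnostics described in this caption (centering, right skew, heavy tails) can be sketched with simple moment statistics. The helper below is hypothetical and is not Eq. (3) from the paper, whose exact form does not appear in this record; positive sample skewness corresponds to a right skew, and positive excess kurtosis to heavier-than-Gaussian tails.

```python
import numpy as np

def residual_stats(y_true, y_pred):
    """Normalized residuals plus sample skewness and excess kurtosis
    (illustrative diagnostics, analogous to the Fig. 3 analysis)."""
    r = np.asarray(y_true, float) - np.asarray(y_pred, float)  # raw residuals
    z = (r - r.mean()) / r.std()        # normalized (zero-mean, unit-std)
    skew = np.mean(z ** 3)              # > 0 -> right-skewed
    excess_kurtosis = np.mean(z ** 4) - 3.0   # > 0 -> heavy tails
    return z, skew, excess_kurtosis
```

A Q–Q plot would then compare `np.sort(z)` against the corresponding standard normal quantiles; deviations in the tails signal the heavy-tailed behavior noted in panels (b) and (d).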
Figure 4 Prediction error as computed from the tenfold cross-validation for the α-model (green) and S-model (blue). (a) Shows the error as a function of α. (b) Shows the error as a function of S. A lower value means the model prediction is better in that range of values.
Figure 5 Expected and CNN-predicted galvanostatic behavior for representative battery microstructures. Inset (a) compares experimental data and the traditional porous electrode theory response, as reported by Doyle and Newman [74], using traditionally assumed values of α and S against the CNN-predicted values. For inset (b), the root mean squared (RMS) deviation in galvanostatic behavior of the CNN-model prediction with respect to the expected values is 1.5%. Inset (c) corresponds to a dual porous structure with low porosity; the RMS deviations are less than 0.05%. Inset (d) corresponds to a distribution of highly textured (aligned, MRD > 20), morphologically anisotropic particles (c/a = 1/10); a maximum RMS deviation of 13.3% is observed. For inset (e), the RMS deviations are less than 0.15%. For inset (f), the maximum RMS deviation is 5.5% and the minimum is 0.67%.
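The RMS deviations quoted in this caption can be sketched as follows, under the assumption that the metric is the root-mean-square difference between the expected and CNN-predicted voltage curves, expressed as a percentage of the mean expected voltage; the record does not state the exact normalization.

```python
import numpy as np

def rms_deviation_percent(v_expected, v_predicted):
    """RMS deviation between two voltage curves, as a percentage of the
    mean expected voltage (assumed normalization, for illustration only)."""
    v_e = np.asarray(v_expected, float)
    v_p = np.asarray(v_predicted, float)
    rms = np.sqrt(np.mean((v_e - v_p) ** 2))   # root-mean-square difference
    return 100.0 * rms / np.mean(v_e)
```

For example, a uniform 0.04 V offset on a 4 V plateau yields a 1% deviation, of the same order as the 1.5% reported for inset (b).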