| Literature DB >> 35548086 |
Shivaji D Pawar1,2, Kamal K Sharma3, Suhas G Sapate4, Geetanjali Y Yadav5, Roobaea Alroobaea6, Sabah M Alzahrani6, Mustapha Hedabou7.
Abstract
Percentage mammographic breast density (MBD) is one of the most notable biomarkers. It is assessed visually by radiologists using the four qualitative Breast Imaging Reporting and Data System (BIRADS) categories. It is demanding for radiologists to differentiate between the two most variably assigned classes, "BIRADS C and BIRADS D." Recently, convolutional neural networks have been found superior in classification tasks due to their ability to extract local features with a shared-weight architecture and spatial-invariance characteristics. The proposed study examines an artificial intelligence (AI)-based MBD classifier toward developing a potential computer-assisted tool for radiologists to distinguish the BIRADS class in modern clinical practice. This article proposes a multichannel DenseNet architecture for MBD classification. The proposed architecture consists of a four-channel DenseNet transfer-learning architecture that extracts significant features from a single patient's two mediolateral oblique (MLO) and two craniocaudal (CC) views of digital mammograms. The performance of the proposed classifier is evaluated using 200 cases consisting of 800 digital mammograms across the BIRADS density classes with validated density ground truth. The classifier's performance is assessed with quantitative metrics such as precision, sensitivity, specificity, and the area under the curve (AUC). Preliminary outcomes reveal that the proposed multichannel model delivers good performance, with an accuracy of 96.67% during training, 90.06% during testing, and an average AUC of 0.9625. The obtained results are also validated qualitatively by a radiologist expert in the field of MBD. The proposed architecture achieved state-of-the-art results with a smaller number of images and less computational power.
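The four-channel fusion the abstract describes can be sketched as a simple concatenation of per-view feature vectors. This is a minimal illustration, not the paper's actual pipeline: the 1024-dimensional per-view features are an assumption based on DenseNet-121's global-average-pooled output size, and the random vectors are placeholders for real extracted features.

```python
import numpy as np

def fuse_view_features(l_mlo, l_cc, r_mlo, r_cc):
    """Concatenate the four per-view feature vectors (one per channel)
    into a single patient-level descriptor for classification."""
    return np.concatenate([l_mlo, l_cc, r_mlo, r_cc])

# Hypothetical 1024-dim global-pooled DenseNet-121 features per view.
rng = np.random.default_rng(0)
views = [rng.random(1024) for _ in range(4)]
fused = fuse_view_features(*views)
print(fused.shape)  # (4096,)
```

A fused descriptor of this kind would then feed a small fully connected head that outputs the four BIRADS class probabilities.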
Keywords: BIRADS Density Classification; DenseNet; breast cancer; deep learning; mammographic breast density; multichannel architecture
Year: 2022 PMID: 35548086 PMCID: PMC9081505 DOI: 10.3389/fpubh.2022.885212
Source DB: PubMed Journal: Front Public Health ISSN: 2296-2565
Figure 1The proposed multichannel architecture for mammographic breast density classification.
Figure 2BIRADS classification: (A) fatty, class A; (B) fatty with some fibroglandular tissue, class B; (C) heterogeneously dense, class C; (D) extremely dense, class D (image courtesy: Densebreast-info.org).
Input dataset used for testing and validation of the proposed algorithm.

| BIRADS density class | No. of images |
|---|---|
| Class A | 200 |
| Class B | 200 |
| Class C | 200 |
| Class D | 200 |
| Total | 800 |
Figure 3Input raw images: (A) Left_MLO, (B) Left_CC, (C) Right_MLO, (D) Right_CC.
Figure 4Output images after segmentation and cropping: (A) Left_MLO, (B) Left_CC, (C) Right_MLO, (D) Right_CC.
Figure 5Contrast enhancement of input images.
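The contrast enhancement step in Figure 5 can be illustrated with a simple min-max contrast stretch. This is a generic stand-in sketch, not necessarily the enhancement method the authors used, which the record does not specify.

```python
import numpy as np

def stretch_contrast(img):
    """Linearly rescale pixel intensities to the full 8-bit range,
    a basic form of contrast enhancement for mammogram preprocessing."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# Toy low-contrast patch: intensities 10..40 spread to 0..255.
arr = np.array([[10, 20], [30, 40]], dtype=np.uint8)
out = stretch_contrast(arr)
print(out.min(), out.max())  # 0 255
```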
Figure 6Conversion of a grayscale image into an RGB representation.
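The grayscale-to-RGB conversion in Figure 6 amounts to replicating the single mammogram channel three times, so the image matches the 3-channel input shape that ImageNet-pretrained DenseNet weights expect. A minimal sketch:

```python
import numpy as np

def gray_to_rgb(gray):
    """Replicate a single-channel image across three channels
    to produce an RGB-shaped array (H, W, 3)."""
    return np.stack([gray, gray, gray], axis=-1)

# A 320 x 320 grayscale mammogram (the input size listed below)
# becomes 320 x 320 x 3.
img = np.zeros((320, 320), dtype=np.uint8)
rgb = gray_to_rgb(img)
print(rgb.shape)  # (320, 320, 3)
```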
Figure 7The proposed multichannel DenseNet framework for BIRADS classification.
Figure 8The architecture of a dense layer.
Technical specification of the proposed architecture.

| Layer | Output size | Details |
|---|---|---|
| Convolution | 112 × 112 | 7 × 7 conv, stride 2 |
| Pooling | 56 × 56 | 3 × 3 max pooling, stride 2 |
| Dense block 1 | 56 × 56 | [1 × 1 conv, 3 × 3 conv] × 6 |
| Transition layer 1 | 56 × 56 → 28 × 28 | Batch normalization and a 1 × 1 convolution layer followed by a 2 × 2 average pooling layer |
| Dense block 2 | 28 × 28 | [1 × 1 conv, 3 × 3 conv] × 12 |
| Transition layer 2 | 28 × 28 → 14 × 14 | Batch normalization and a 1 × 1 convolution layer followed by a 2 × 2 average pooling layer |
| Dense block 3 | 14 × 14 | [1 × 1 conv, 3 × 3 conv] × 24 |
| Transition layer 3 | 14 × 14 → 7 × 7 | Batch normalization and a 1 × 1 convolution layer followed by a 2 × 2 average pooling layer |
| Dense block 4 | 7 × 7 | [1 × 1 conv, 3 × 3 conv] × 16 |
| Classification layer | 1 × 1 | 7 × 7 global average pooling, 1000-D fully connected, SoftMax |
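The output sizes in the specification table can be checked with the standard convolution output-size formula. Note these 112/56/28/14/7 sizes correspond to DenseNet-121's canonical 224 × 224 input; the experiment below lists 320 × 320 images, for which the actual spatial sizes would scale up accordingly. A small check, assuming the usual DenseNet-121 padding of 3 for the stem convolution and 1 for the max pool:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Stem: 7x7 conv, stride 2, pad 3 on a 224 input -> 112,
# then 3x3 max pool, stride 2, pad 1 -> 56, matching the table.
print(conv_out(224, 7, 2, 3))  # 112
print(conv_out(112, 3, 2, 1))  # 56
```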
Figure 9The distribution of image data.
Setting of hyperparameters used during the experiment.

| Hyperparameter | Value |
|---|---|
| Model | Multichannel DenseNet |
| No. of channels | 4 |
| Initial learning rate | 0.1 |
| Image size | 320 × 320 × 3 |
| Batch size | 4 |
| Target labels | Ground truth |
| Data augmentation | Not used |
| Loss function | Categorical cross-entropy |
| Optimization algorithm | Stochastic gradient descent |
| Validation parameter | Classification accuracy |
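The categorical cross-entropy loss listed above can be written out in a few lines. This is a generic sketch of the loss, with hypothetical one-hot targets for the four BIRADS classes and made-up prediction probabilities, not values from the experiment.

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over a batch:
    -mean over samples of sum_c y_true[c] * log(y_pred[c])."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

# Two samples, four BIRADS classes (A-D), one-hot ground truth.
y_true = np.array([[1, 0, 0, 0], [0, 0, 1, 0]])
y_pred = np.array([[0.9, 0.05, 0.03, 0.02], [0.1, 0.1, 0.7, 0.1]])
print(round(categorical_cross_entropy(y_true, y_pred), 4))  # 0.231
```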
Figure 10Training phase performance of the model (A) model accuracy and (B) model loss.
Figure 11Validation results of the proposed model in phase II: (A) model accuracy and (B) model loss.
Figure 12(A) The heat map and (B) the ROC curve of the proposed model.
Performance parameters of the proposed method.

| BIRADS class | Precision | Recall | F1-score | Overall accuracy | Average AUC |
|---|---|---|---|---|---|
| Predominantly fatty (class A) | 1.00 | 0.866 | 0.92 | 0.9006 | 0.9625 |
| Fatty with some fibroglandular tissue (class B) | 0.77 | 0.77 | 0.755 | | |
| Heterogeneously dense (class C) | 0.857 | 1.00 | 0.922 | | |
| Extremely dense (class D) | 0.90 | 1.00 | 0.947 | | |
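The per-class F1-scores in the table are the harmonic mean of precision and recall, which can be verified directly. The function below is a generic definition, applied to the class-D row as an example.

```python
def f1(precision, recall):
    """F1-score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Class D row: precision 0.90, recall 1.00 -> F1 0.947, as tabulated.
print(round(f1(0.90, 1.00), 3))  # 0.947
```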
Comparative status of the proposed method with current state-of-the-art methods.

| Method | No. of images | Architecture | Accuracy |
|---|---|---|---|
| Wu et al. | 200,000 | A deep convolutional network with 100 layers | 0.825 on four views |
| Ciritsis et al. | 20,578 | A deep convolutional network with 11 layers; CC and MLO views analyzed separately | 0.897 on CC views and 0.866 on MLO views |
| Kaiser et al. | 8,150 | A multichannel architecture with transfer learning (VGG-Net) | 0.88 on all four views |
| Shi et al. | 322 | A lightweight deep learning architecture with 3 convolutional layers | 0.836 on MLO views |
| Deng et al. | 18,157 | A single-channel architecture with transfer learning (DenseNet-121 combined with an SE-Attention network) | 0.9179 on all four views |
| Proposed method | 800 | A multichannel architecture with transfer learning (DenseNet-121) | 0.90 on four views |