Sarang Sharma, Sheifali Gupta, Deepali Gupta, Sapna Juneja, Punit Gupta, Gaurav Dhiman, Sandeep Kautish.
Abstract
Blood cell counts are highly useful for identifying the presence of a particular disease or ailment. Measuring them traditionally requires sophisticated equipment and invasive methods to acquire blood cell slides or images, which are then analyzed to count and classify the different cell types. Deep learning-based methods are now widely used for this analysis; they are less time-consuming and require less sophisticated equipment. This paper implements a deep learning (DL) model based on DenseNet121 to classify the different types of white blood cells (WBC). The model is optimized with the preprocessing techniques of normalization and data augmentation, and it yields an accuracy of 98.84%, a precision of 99.33%, a sensitivity of 98.85%, and a specificity of 99.61%. The proposed model is simulated with four batch sizes (BS) using the Adam optimizer for 10 epochs. The results show that DenseNet121 performs best with batch size 8. The dataset, taken from Kaggle, contains 12,444 images: 3120 eosinophils, 3103 lymphocytes, 3098 monocytes, and 3123 neutrophils. With such results, these models could be used to develop clinically useful solutions that detect WBC in blood cell images.
Year: 2022 PMID: 35069725 PMCID: PMC8769872 DOI: 10.1155/2022/7384131
Source DB: PubMed Journal: Comput Intell Neurosci
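The abstract mentions normalization as a preprocessing step without specifying the scheme. A minimal sketch, assuming simple [0, 1] intensity scaling of 224×224 RGB inputs (the DenseNet121 input size given later in the architecture table); the exact scheme used by the authors may differ:

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Scale pixel intensities from [0, 255] to [0, 1] (assumed scheme)."""
    return image.astype(np.float32) / 255.0

# Example: a dummy 224x224 RGB blood-cell image.
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
out = normalize(img)
print(out.min() >= 0.0 and out.max() <= 1.0)  # True
```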
Comparison of existing state-of-the-art models.
| Citation/year of publishing | Reference | Approach | Objective |
|---|---|---|---|
| [ | CMaP | CNN | To implement a system to diagnose acute leukaemia using WBC images |
| [ | ICPSC | VGG16, KNN, CNN | To implement a transfer learning algorithm for diagnosing and classifying WBC images |
| [ | Artificial cells, nanomedicine, and biotechnology | CNN, VGG16, VGG19, Inception-V3, ResNet-50 | To implement the TWO-DCNN algorithm for WBC classification |
| [ | The international conference on intelligent engineering and management | CNN, VGG16, VGG19, ResNet50, ResNet101, and Inception-V3 | To automatically classify sickle cell disease, using data augmentation techniques to yield better accuracy |
| [ | Biotechnology & biotechnological equipment | CNN and Faster R-CNN | To implement a deep learning method that identifies lymphoma cells in a blood cell dataset using pretrained networks |
| [ | IRBM | CNN, RNN, and canonical correlation analysis (CCA) | To implement the CCA method to observe the effect of overlapping nuclei |
| [ | Soft computing | CNN, ELM, and mRMR algorithm | To use pretrained AlexNet, VGG16, GoogLeNet, and ResNet as feature extractors for predicting and classifying blood cells |
| [ | CMaP | CNN, VGG16 | To implement a system for classifying eight blood cell groups with high accuracy using a transfer learning approach with convolutional neural networks |
| [ | The soft computing and signal processing | CNN, LeNet, VGG16, Xception | To implement a CNN-based deep learning system for WBC classification |
| [ | JBaH | CNN, MGCNN | To implement a Gabor-wavelet deep CNN (MGCNN) on medical hyperspectral imaging for blood cell classification |
DenseNet121 architecture.
| Model | Layers | Parameters (millions) | Input layer size | Output layer size |
|---|---|---|---|---|
| DenseNet121 | 121 | 8 | (224,224,3) | (4,1) |
Figure 1. Illustration of the major functional blocks of the DenseNet121 model.
White blood cell dataset description.
| Sr. no. | White blood cell | Number of training images | Number of validating images |
|---|---|---|---|
| 1 | Eosinophil (E.P) | 2497 | 623 |
| 2 | Lymphocyte (L.C) | 2483 | 620 |
| 3 | Monocyte (M.C) | 2478 | 620 |
| 4 | Neutrophil (N.P) | 2499 | 624 |
Figure 2. White blood cell dataset: (a) E.P, (b) L.C, (c) M.C, and (d) N.P.
Figure 3. Flipping data augmentation: (a) original, (b) horizontal flipping, and (c) vertical flipping.
Figure 4. Rotation data augmentation: (a) original, (b) 90 degree anticlockwise, (c) 180 degree anticlockwise, and (d) 270 degree anticlockwise.
Figure 5. Zooming data augmentation: (a) original image, (b) image with zooming factor 0.5, and (c) image with zooming factor 0.8.
Figure 6. Brightness data augmentation: (a) original image, (b) image with brightness factor 0.2, and (c) image with brightness factor 0.4.
Number of images per cell type before and after data augmentation.
| Sr. no. | White blood cell | Number of images before augmentation | Number of images after augmentation |
|---|---|---|---|
| 1 | E.P | 3120 | 5010 |
| 2 | L.C | 3103 | 5003 |
| 3 | M.C | 3098 | 5017 |
| 4 | N.P | 3123 | 5020 |
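The augmentations in Figures 3-6 (flips, rotations, brightness, zoom) can be sketched with plain NumPy. The brightness factor and the center-crop approximation of zoom below are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Generate the augmented variants shown in Figures 3-6: flips,
    anticlockwise rotations, a brightness change, and a zoom
    (approximated here by a center crop)."""
    h, w = image.shape[:2]
    return [
        np.flip(image, axis=1),        # horizontal flip (Figure 3b)
        np.flip(image, axis=0),        # vertical flip (Figure 3c)
        np.rot90(image, k=1),          # 90 deg anticlockwise (Figure 4b)
        np.rot90(image, k=2),          # 180 deg anticlockwise (Figure 4c)
        np.rot90(image, k=3),          # 270 deg anticlockwise (Figure 4d)
        np.clip(image * 1.2, 0, 255),  # brightness scaling (factor assumed)
        image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4],  # ~0.5 zoom crop
    ]

img = np.arange(224 * 224 * 3, dtype=np.float32).reshape(224, 224, 3)
aug = augment(img)
print(len(aug))  # 7
```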
DenseNet121 layer details.
| Convolutional block | Convolutional layers | Batch normalization | ReLU | Concatenated layers |
|---|---|---|---|---|
| CB1 | 1 | 1 | 1 | 0 |
| CB2 | 6 | 12 | 12 | 6 |
| CB3 | 12 | 24 | 24 | 12 |
| CB4 | 24 | 48 | 48 | 24 |
| CB5 | 16 | 32 | 32 | 16 |
Output shapes of the first two convolutional blocks.
| Layer | Output shape (H, W, C) |
|---|---|
| CB1 | 112,112,64 |
| CB2_Block1_0 | 56,56,64 |
| CB2_Block1_1 | 56,56,128 |
| CB2_Block2_0 | 56,56,96 |
| CB2_Block2_1 | 56,56,128 |
| CB2_Block3_0 | 56,56,128 |
| CB2_Block3_1 | 56,56,128 |
| CB2_Block4_0 | 56,56,160 |
| CB2_Block4_1 | 56,56,128 |
| CB2_Block5_0 | 56,56,192 |
| CB2_Block5_1 | 56,56,128 |
| CB2_Block6_0 | 56,56,224 |
| CB2_Block6_1 | 56,56,128 |
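The channel counts above follow DenseNet's concatenation rule: assuming the standard DenseNet121 growth rate of 32, the k-th dense layer in CB2 sees 64 + 32(k - 1) input channels, while each 1×1 bottleneck emits 4 × 32 = 128 channels, matching the alternating 128-channel rows. A small sketch of that arithmetic:

```python
GROWTH_RATE = 32              # standard DenseNet121 growth rate (assumed)
BOTTLENECK = 4 * GROWTH_RATE  # 1x1 bottleneck output channels

def block_input_channels(initial: int, num_layers: int) -> list[int]:
    """Input channel count seen by each dense layer: the block's initial
    feature maps plus 32 new channels concatenated per preceding layer."""
    return [initial + GROWTH_RATE * k for k in range(num_layers)]

# CB2 starts from 64 channels and has 6 dense layers (see tables above).
print(block_input_channels(64, 6))  # [64, 96, 128, 160, 192, 224]
```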
Filter visualizations for each convolution layer.
[Table omitted: filter visualizations (images) for the first and last convolution layers of blocks CB1-CB5 are not recoverable from the extracted text.]
Images after each dense block.
[Table omitted: output feature-map images after the first and last convolution layers of blocks CB1-CB5 are not recoverable from the extracted text.]
Training performance for all batch sizes (BS).
| BS | Epoch | Train loss | Valid loss | Error rate | Valid accuracy (%) |
|---|---|---|---|---|---|
| 8 | 1 | 0.753 | 0.376 | 0.153 | 84.70 |
| | … | … | … | … | … |
| | 9 | 0.175 | 0.052 | 0.017 | 98.34 |
| | 10 | 0.188 | 0.044 | 0.012 | 98.84 |
| 16 | 1 | 0.762 | 0.310 | 0.122 | 87.89 |
| | … | … | … | … | … |
| | 9 | 0.191 | 0.065 | 0.027 | 97.33 |
| | 10 | 0.152 | 0.037 | 0.013 | 98.79 |
| 32 | 1 | 0.845 | 0.381 | 0.153 | 84.73 |
| | … | … | … | … | … |
| | 9 | 0.198 | 0.071 | 0.026 | 97.43 |
| | 10 | 0.144 | 0.054 | 0.019 | 98.14 |
| 64 | 1 | 1.080 | 0.328 | 0.130 | 87.09 |
| | … | … | … | … | … |
| | 9 | 0.268 | 0.082 | 0.031 | 96.96 |
| | 10 | 0.195 | 0.073 | 0.027 | 97.38 |
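In the training table, the error rate column is simply 1 minus the validation accuracy expressed as a fraction. A quick consistency check over the reported final-epoch rows:

```python
# (batch size, error rate, validation accuracy %) from the final epochs
# of the training-performance table.
rows = [(8, 0.012, 98.84), (16, 0.013, 98.79), (32, 0.019, 98.14), (64, 0.027, 97.38)]

for bs, err, acc in rows:
    # Error rate and accuracy should agree to rounding precision.
    assert abs(err - (1 - acc / 100)) < 5e-3, bs
print("consistent")
```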
Figure 7. Confusion matrix of the DenseNet121 model with four batch sizes: (a) 8, (b) 16, (c) 32, and (d) 64.
Confusion matrix parameters of DenseNet121 for all batch sizes.
| Batch size | Cell type | Precision (%) | Sensitivity (%) | Specificity (%) | Cohen's kappa | Overall accuracy (%) |
|---|---|---|---|---|---|---|
| 8 | E.P | 96.56 | 98.76 | 98.87 | 0.9845 | 95.56 |
| | L.C | 100 | 100 | 100 | | |
| | M.C | 100 | 100 | 100 | | |
| | N.P | 98.79 | 96.66 | 99.59 | | |
| 16 | E.P | 96.78 | 98.51 | 98.89 | 0.9838 | 95.80 |
| | L.C | 99.79 | 100 | 99.93 | | |
| | M.C | 100 | 100 | 100 | | |
| | N.P | 98.65 | 96.65 | 99.56 | | |
| 32 | E.P | 95.13 | 97.65 | 98.36 | 0.9752 | 95.01 |
| | L.C | 99.99 | 100 | 99.96 | | |
| | M.C | 99.89 | 100 | 99.96 | | |
| | N.P | 97.63 | 94.89 | 99.22 | | |
| 64 | E.P | 94.21 | 95.81 | 98.01 | 0.9651 | 95.38 |
| | L.C | 99.79 | 99.89 | 99.93 | | |
| | M.C | 100 | 99.69 | 100 | | |
| | N.P | 95.62 | 94.80 | 98.55 | | |
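The per-class precision, sensitivity, and specificity reported above can be computed from a confusion matrix in a one-vs-rest fashion. A minimal NumPy sketch; the matrix below is illustrative, not the paper's data:

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Precision, sensitivity (recall), and specificity per class from a
    confusion matrix with true classes in rows, predictions in columns."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class, but wrong
    fn = cm.sum(axis=1) - tp          # class missed
    tn = cm.sum() - tp - fp - fn      # everything else
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return precision, sensitivity, specificity

# Illustrative 4-class matrix (E.P, L.C, M.C, N.P) -- not the paper's data.
cm = np.array([[610, 0, 0, 13],
               [0, 620, 0, 0],
               [0, 0, 620, 0],
               [20, 0, 0, 604]])
p, s, sp = per_class_metrics(cm)
print(np.round(100 * s, 2))  # per-class sensitivity in percent
```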
Figure 8. ROC area for DenseNet121 with (a) batch size 8 and (b) batch size 16.
Performance comparison of different batch sizes with Adam optimizer.
| Batch size | Average precision (%) | Average sensitivity (%) | Average specificity (%) | Accuracy (%) |
|---|---|---|---|---|
| 8 | 99.33 | 98.85 | 99.61 | 98.84 |
| 16 | 98.8 | 98.79 | 99.59 | 98.79 |
| 32 | 98.16 | 98.13 | 99.37 | 98.14 |
| 64 | 97.4 | 97.39 | 97.38 | 97.38 |
Figure 9. Accuracy of the DenseNet121 model.
Figure 10. Learning rate vs. loss curve for the proposed model with (a) batch size 8 and (b) batch size 16.
Figure 11. Batches processed vs. loss curve for DenseNet121 with (a) batch size 8 and (b) batch size 16.
Comparison with existing state-of-the-art models.
| Study | Dataset source | No. of images | Technique used | Accuracy (%) |
|---|---|---|---|---|
| Boldú et al. [ | ImageNet | 16450 | DenseNet121 | 93.6 |
| Baby and Devaraj [ | ImageNet | 16450 | VGG16 | 82.35 |
| Yao et al. [ | Kaggle | 12444 | VGG16 | 95.7 |
| Sen et al. [ | Hospital Santiago de Cuba | 626 | InceptionV3 | 91 |
| Sheng et al. [ | MS COCO | 1673 | ResNet50 | 75.71 |
| Patil et al. [ | Kaggle | 12444 | Xception + LSTM | 95.89 |
| Özyurt [ | Kaggle | 12444 | AlexNet | 95.29 |
| Acevedo et al. [ | Hospital Clinic of Barcelona | 17092 | VGG16 | 96.2 |
| Sharma et al. [ | Kaggle | 12444 | LeNet | 87.93 |
| Huang et al. [ | LCTFS | 10000 | MGCNN | 97.65 |
| Proposed methodology | Kaggle | 12444 | DenseNet121 | 98.84 |