Avik Kuthiala1, Naman Tuli1, Harpreet Singh1, Omer F Boyraz2, Neeru Jindal1, Ravimohan Mavuduru3, Smita Pattanaik3, Prashant Singh Rana1.
Abstract
Arm venous segmentation plays a crucial role in smart venipuncture. Computer-vision-based vein imaging can reduce the difficulty of locating veins for intravenous procedures. To facilitate this, a high-resolution dataset of arm images was curated and is presented in this study. Leveraging the ability of near-infrared imaging to detect veins easily, controlled ambient lighting conditions were created inside a small enclosure to capture the images. The acquired images were annotated to create the corresponding masks for the dataset. To extend the scope and demonstrate the usability of the dataset, the images and corresponding masks were used to train an image segmentation model. In addition to basic preprocessing and image-augmentation techniques, a U-Net-based architecture was used for the segmentation task. The segmentation results obtained after applying the different preprocessing methods are compared using various evaluation metrics and visualised in the study. Finally, possible applications of the presented dataset are investigated.
Year: 2022 PMID: 36238666 PMCID: PMC9553422 DOI: 10.1155/2022/4559219
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Image samples from the collected dataset.
Figure 2. Data acquisition setup showing (a) cardboard box with holes for the arm and camera and (b) camera setup mounted on a Raspberry Pi case.
Figure 3. Sample image histogram.
Figure 4. Resulting images after applying (b) AHE and (c) CLAHE techniques to the original image (a).
Figure 5. Sequence of steps followed for applying contrast-limited adaptive histogram equalization.
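The clip-then-equalize sequence in Figure 5 can be sketched with a simplified, numpy-only version of contrast-limited adaptive histogram equalization (the histogram of each tile is clipped, the excess mass redistributed, and the tile equalized independently; a production pipeline would typically use a library routine such as OpenCV's `cv2.createCLAHE`, and the tile count and clip limit below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=40):
    """Equalize one greyscale tile with a clipped histogram (core CLAHE step)."""
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute clipped mass
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf.astype(np.uint8)[tile]  # use the scaled CDF as a lookup table

def clahe_simplified(img, tiles=8, clip_limit=40):
    """Apply clip-limited equalization per tile.

    Simplification: no bilinear blending between neighbouring tiles, and the
    image size is assumed divisible by `tiles`.
    """
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    out = np.empty_like(img)
    for i in range(tiles):
        for j in range(tiles):
            ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
            out[ys, xs] = clip_limited_equalize(img[ys, xs], clip_limit)
    return out

rng = np.random.default_rng(0)
# Low-contrast dummy "arm" image: intensities squeezed into the 80-119 band.
img = rng.integers(80, 120, size=(384, 384), dtype=np.uint8)
enhanced = clahe_simplified(img)
```

After equalization the narrow intensity band is stretched across the full range, which is the contrast gain Figure 4(c) illustrates.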
Figure 6. Architecture of the U-Net model.
Default parameters of the model.
| Parameter | Value |
|---|---|
| Image size | 384 |
| Learning rate | 1 |
| Epochs | 100 |
| Image colour mode | RGB |
| Mask colour mode | Greyscale |
| No. of convolutional blocks | 4 |
| No. of de-convolutional blocks | 4 |
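With a 384-pixel input and four convolutional/de-convolutional blocks (table above), each pooling step halves the spatial resolution. A quick shape trace makes the resulting feature-map sizes explicit; the channel counts (64 doubling per level) are an assumption typical of standard U-Net, not stated in the paper:

```python
def unet_shapes(size=384, blocks=4, base_channels=64):
    """Trace (spatial size, channels) through a standard U-Net encoder/decoder."""
    enc = []
    s, c = size, base_channels
    for _ in range(blocks):
        enc.append((s, c))    # conv-block output, kept for the skip connection
        s, c = s // 2, c * 2  # 2x2 max-pool halves resolution; channels double
    bottleneck = (s, c)
    dec = []
    for _ in range(blocks):
        s, c = s * 2, c // 2  # up-convolution doubles resolution; channels halve
        dec.append((s, c))
    return enc, bottleneck, dec

enc, bottleneck, dec = unet_shapes()
# enc: [(384, 64), (192, 128), (96, 256), (48, 512)]
# bottleneck: (24, 1024); the decoder mirrors the encoder back to (384, 64)
```

Note that 384 is divisible by 2 four times (384 → 192 → 96 → 48 → 24), so the skip connections line up without padding or cropping.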
Figure 7A schematic diagram illustrating the entire workflow.
Comparison of different augmentation methods based on their PSNR, IoU, and Dice scores.
| Training with: | PSNR | | | | IoU | | | | Dice | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Mean | Std | Min | Max | Mean | Std | Min | Max | Mean | Std | Min | Max |
| No augmentations | 0.435 | 0.146 | 0.068 | 0.678 | 0.682 | 0.01 | 0.552 | 0.961 | 0.397 | 0.11 | 0.067 | 0.517 |
| AHE | 0.582 | 0.165 | 0.045 | 0.827 | 0.788 | 0.007 | 0.659 | 0.973 | 0.48 | 0.141 | 0.045 | 0.708 |
| Both AHE & CLAHE | | 0.155 | 0.117 | 0.93 | | 0.004 | 0.821 | 0.996 | | 0.149 | 0.117 | 0.871 |
| CLAHE | 0.624 | 0.165 | 0.097 | 0.886 | 0.79 | 0.005 | 0.718 | 0.98 | 0.545 | 0.146 | 0.097 | 0.799 |
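The three metrics compared above can be computed directly from a binary predicted mask and its ground truth. A minimal numpy sketch follows; note the PSNR values reported in the table appear to be normalized to [0, 1], whereas the standard definition yields decibels, so the paper's exact scaling is unclear and the plain dB form is shown here:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over union: |A∩B| / |A∪B|."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def psnr(pred, gt, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images valued in [0, peak]."""
    mse = np.mean((pred.astype(float) - gt.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy example: a 16-pixel square "vein", of which the prediction covers half.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:4] = True
```

For this toy pair, Dice is 2·8/(8+16) = 2/3 and IoU is 8/16 = 0.5, illustrating that Dice always upper-bounds IoU for the same masks.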
Performance of the U-Net model with varying hyper-parameters.
| | Epochs | | | Learning rate | | | Activation function | | |
|---|---|---|---|---|---|---|---|---|---|
| Metric | 50 | 100 | 200 | 0.0001 | 0.0005 | 0.001 | Tanh | Sigmoid | ReLU |
| Dice coefficient | 0.52 | 0.685 | 0.678 | 0.685 | 0.572 | 0.542 | 0.532 | 0.468 | 0.685 |
| IoU | 0.818 | 0.893 | 0.895 | 0.893 | 0.853 | 0.847 | 0.826 | 0.784 | 0.893 |
| PSNR | 0.619 | 0.751 | 0.741 | 0.751 | 0.596 | 0.583 | 0.581 | 0.505 | 0.751 |
Figure 8. Input image, annotated ground truth, and predicted output from the U-Net model for 3 sample images.
Figure 9. Comparing the effect of test-time augmentations on 3 images based on their (a) Dice coefficient and (b) IoU.
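Test-time augmentation of the kind evaluated in Figure 9 can be sketched as: run the model on transformed copies of the input, undo each transform on the resulting prediction, and average. The `dummy_model` below is a stand-in pointwise threshold used only to exercise the pipeline, not the paper's network, and the chosen flips are illustrative assumptions:

```python
import numpy as np

def tta_predict(model, image):
    """Average predictions over flipped copies of the input (test-time augmentation)."""
    flips = [
        (lambda x: x,          lambda y: y),           # identity
        (lambda x: x[:, ::-1], lambda y: y[:, ::-1]),  # horizontal flip and its inverse
        (lambda x: x[::-1, :], lambda y: y[::-1, :]),  # vertical flip and its inverse
    ]
    preds = [undo(model(aug(image))) for aug, undo in flips]
    return np.mean(preds, axis=0)

# Stand-in "model": a pointwise threshold producing a binary mask.
dummy_model = lambda x: (x > 0.5).astype(float)
img = np.linspace(0, 1, 16).reshape(4, 4)
mask = tta_predict(dummy_model, img)
```

Because a pointwise threshold commutes with flips, all three predictions coincide here; for a real segmentation network they generally differ, and the averaging smooths out orientation-dependent errors.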
Figure 10. Image-wise performance analysis using a confusion matrix.