Nathan R Huber, Andrew D Missert, Hao Gong, Scott S Hsieh, Shuai Leng, Lifeng Yu, Cynthia H McCollough.
Abstract
In this study, we describe a systematic approach to optimizing deep-learning-based image processing algorithms using random search. The optimization technique is demonstrated on a phantom-based noise reduction training framework; however, the techniques described can be applied generally to other deep-learning image processing applications. The parameter space explored included the number of convolutional layers, number of filters, kernel size, loss function, and network architecture (either U-Net or ResNet). A total of 100 network models were examined (50 random search, 50 ablation experiments). Following the random search, the ablation experiments yielded only a very minor performance improvement, indicating that near-optimal settings had been found during the random search. The top-performing network architecture was a U-Net with 4 pooling layers, 64 filters, a 3×3 kernel size, ELU activation, and a weighted feature reconstruction loss (0.2×VGG + 0.8×MSE). Relative to the low-dose input image, the CNN reduced noise by 90%, reduced RMSE by 34%, and increased SSIM by 76% on six patient exams reserved for testing. The visualization of hepatic and bone lesions was greatly improved following noise reduction.
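The hyperparameter optimization described above can be sketched as follows. This is a minimal illustration of random search over the parameter space named in the abstract (architecture, pooling layers, filters, kernel size, activation, and loss weighting); the candidate values and the `evaluate` callback are assumptions for illustration, not the authors' exact grids or training pipeline, and `feat` stands in for a pretrained VGG feature extractor.

```python
import random
import numpy as np

# Illustrative search space mirroring the parameters named in the abstract;
# the candidate values are assumptions, not the authors' exact grids.
SEARCH_SPACE = {
    "architecture": ["unet", "resnet"],
    "pooling_layers": [2, 3, 4, 5],
    "filters": [16, 32, 64, 128],
    "kernel_size": [3, 5, 7],
    "activation": ["relu", "elu"],
    "vgg_weight": [0.0, 0.2, 0.5, 1.0],  # MSE weight is 1 - vgg_weight
}

def sample_config(rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def weighted_feature_loss(pred, target, feat, w_vgg=0.2, w_mse=0.8):
    """Weighted feature reconstruction loss (w_vgg*VGG + w_mse*MSE).

    `feat` is a stand-in for a pretrained VGG activation map (an assumption
    here); the 0.2/0.8 defaults match the weighting reported in the abstract.
    """
    mse = np.mean((pred - target) ** 2)
    perceptual = np.mean((feat(pred) - feat(target)) ** 2)
    return w_vgg * perceptual + w_mse * mse

def random_search(n_trials, evaluate, seed=0):
    """Evaluate n_trials randomly sampled configs; return (best_config, best_score).

    Lower scores are better (e.g. validation RMSE of the trained denoiser).
    """
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = evaluate(cfg)  # train and validate the denoiser for this config
        if best is None or score < best[1]:
            best = (cfg, score)
    return best
```

In practice `evaluate` would train the candidate network on the phantom data and return a validation metric; ablation experiments would then perturb one parameter of the winning configuration at a time.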
Keywords: Deep learning; Hyper-parameter optimization; Noise reduction; Random search
Year: 2021 PMID: 35386837 PMCID: PMC8982987 DOI: 10.1117/12.2582143
Source DB: PubMed Journal: Proc SPIE Int Soc Opt Eng ISSN: 0277-786X