Jack Yu-Chuan Li1,2,3, Yao-Chin Wang4,5, Dina Nur Anggraini Ningrum1,6, Sheng-Po Yuan1,7, Woon-Man Kung8, Chieh-Chen Wu8, I-Shiang Tzeng8,9,10, Chu-Ya Huang11. 1. Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan. 2. Department of Dermatology, Wan Fang Hospital, Taipei, Taiwan. 3. Taipei Medical University Research Center of Cancer Translational Medicine, Taipei, Taiwan. 4. Graduate Institute of Injury Prevention and Control, College of Public Health, Taipei Medical University, Taipei, Taiwan. 5. Department of Emergency Medicine, Min-Sheng General Hospital, Taoyuan, Taiwan. 6. Public Health Department, Universitas Negeri Semarang, Semarang City, Indonesia. 7. Department of Otorhinolaryngology, Shuang-Ho Hospital, Taipei Medical University, New Taipei City, Taiwan. 8. Department of Exercise and Health Promotion, College of Kinesiology and Health, Chinese Culture University, Taipei, Taiwan. 9. Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan. 10. Department of Statistics, National Taipei University, Taipei, Taiwan. 11. Taiwan College of Healthcare Executives, Taipei, Taiwan.
Abstract
BACKGROUND: Skin cancer is a growing global burden among malignancies, with incidence increasing each year and melanoma being the deadliest form. Imaging-based automated skin cancer detection remains challenging owing to variability in skin lesions and the limited availability of standard datasets. Recent research indicates the potential of deep convolutional neural networks (CNNs) to predict outcomes from both simple and highly complex images. However, their implementation requires high-performance computational facilities, which are not feasible in low-resource and remote areas of health care. Combining images with patient metadata is promising, but studies are still lacking. OBJECTIVE: We aimed to develop malignant melanoma detection based on dermoscopic images and patient metadata using an artificial intelligence (AI) model that runs on low-resource devices. METHODS: We used the open-access International Skin Imaging Collaboration (ISIC) Archive, a dermatology repository consisting of 23,801 biopsy-proven dermoscopic images. We tested performance on binary classification of malignant melanoma vs nonmalignant melanoma. From 1200 sample images, we split the data into training (72%), validation (18%), and test (10%) sets. We compared a CNN using image data only (CNN model) against a CNN for image data combined with an artificial neural network (ANN) for patient metadata (CNN+ANN model). RESULTS: Balanced accuracy was higher for the CNN+ANN model (92.34%) than for the CNN model (73.69%). Incorporating patient metadata through the ANN prevented the overfitting that occurred in the CNN model trained on dermoscopic images alone. The model's small size (24 MB) allows it to run on a mid-range computer without cloud computing, making it suitable for deployment on devices with limited resources.
CONCLUSION: The CNN+ANN model increases classification accuracy in malignant melanoma detection even with limited data and is promising for development as a screening device in remote and low-resource health care settings.
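The two-branch design described in the methods can be illustrated with a minimal sketch: a CNN branch extracts features from the dermoscopic image, an ANN (multilayer perceptron) branch encodes the patient metadata, and the two feature vectors are concatenated before a binary classification head. This is a hypothetical PyTorch illustration of the fusion concept only; the layer sizes, metadata fields, and input resolution are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class ImageMetadataNet(nn.Module):
    """Illustrative two-branch model: CNN for the dermoscopic image plus
    an ANN for patient metadata, fused for binary classification.
    All layer sizes are assumptions for demonstration."""

    def __init__(self, n_meta_features: int = 3):
        super().__init__()
        # CNN branch: small convolutional stack for a 3x64x64 image
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # ANN branch: encodes metadata (e.g. age, sex, lesion site)
        self.ann = nn.Sequential(nn.Linear(n_meta_features, 16), nn.ReLU())
        # Fused head: concatenated features -> one malignancy logit
        self.head = nn.Linear(32 + 16, 1)

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn(image), self.ann(metadata)], dim=1)
        return self.head(fused)  # raw logit; apply sigmoid for a probability

model = ImageMetadataNet(n_meta_features=3)
logit = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3))
print(logit.shape)  # torch.Size([4, 1])
```

A model of this scale stays in the tens of megabytes once trained, consistent with the abstract's point that such a fused network can run on a mid-range machine without cloud computing.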