
COUNTING MONKEYPOX LESIONS IN PATIENT PHOTOGRAPHS: LIMITS OF AGREEMENT OF MANUAL COUNTS AND ARTIFICIAL INTELLIGENCE.

Andrew J McNeil1, David W House2, Placide Mbala-Kingebeni3, Olivier Tshiani Mbaya4, Lori E Dodd5, Edward W Cowen6, Véronique Nussenblatt7, Tyler Bonnett8, Ziche Chen2, Inga Saknite9, Benoit M Dawant10, Eric R Tkaczyk11.   

Year:  2022        PMID: 36116509      PMCID: PMC9534148          DOI: 10.1016/j.jid.2022.08.044

Source DB:  PubMed          Journal:  J Invest Dermatol        ISSN: 0022-202X            Impact factor:   7.590



TO THE EDITOR

The extent of cutaneous involvement is a key aspect of diagnosing and monitoring monkeypox, whose causative agent is considered the most important orthopoxvirus infecting humans (Sklenovska et al. 2018). The spread of monkeypox cases in Europe and North America in May 2022 raised global public health concern (Muyembe-Tamfum 2022), leading the World Health Organization (WHO) to declare a public health emergency on 23 July 2022 (Ghebreyesus 2022). Monkeypox affects the skin in >99% of cases (Pittman et al. 2022), with substantial morbidity. Current WHO guidelines assign severity according to the number of skin lesions: mild (<25 skin lesions), moderate (25–99), severe (100–250), or grave (>250) (Muyembe-Tamfum 2022) (Figure S1). Lesion counts are also a key parameter in monkeypox therapeutic trials. For example, the PALM007 randomized controlled trial of tecovirimat versus placebo requires daily lesion counts until resolution or day 28 (Nussenblatt 2022). Counting skin lesions manually is labor intensive and presents logistical challenges, especially in the remote regions prone to monkeypox outbreaks. We sought to develop an artificial intelligence (AI) algorithm to count monkeypox lesions in patient photographs, hypothesizing that the AI would count lesions in close agreement with manual counts. We developed and tested the AI with a convenience series of photographs from an observational study, collected at the remote General Reference Hospital of Kole (Kole hospital) and the surrounding rainforest of the Congo River basin of the Democratic Republic of the Congo (DRC).
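The WHO severity thresholds above map directly onto a simple grading rule. As a minimal illustrative sketch (the function name and code are ours, not from the study or the WHO guideline):

```python
def who_severity(lesion_count: int) -> str:
    """Map a total skin-lesion count to the WHO monkeypox severity grade.

    Thresholds as cited in the text: mild (<25), moderate (25-99),
    severe (100-250), grave (>250).
    """
    if lesion_count < 25:
        return "mild"
    if lesion_count < 100:
        return "moderate"
    if lesion_count <= 250:
        return "severe"
    return "grave"
```

Under this rule, an automated count of, say, 131 lesions (as in Figure 1a) would stage the patient as severe.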
The observational study was conducted jointly by the Institut National de Recherche Biomédicale and the US Army Medical Research Institute of Infectious Diseases (USAMRIID), approved by the Human Use Committee at USAMRIID (FY05-13), the Headquarters, United States Army Medical Research and Development Command Institutional Review Board (IRB), and the Ethics Committee at the University of Kinshasa School of Public Health. Initial clinical results and study population characteristics have been reported elsewhere (Mbala et al. 2017; Pittman et al. 2022). All patients provided written informed consent and were confirmed to have monkeypox virus infection by PCR. Non-identifiable photographs were transferred to Vanderbilt University for use under local IRB approval (191042). From this set, all images amenable to unambiguous human counting were used for AI training and testing. Photographs where counting in the field would not be performed (e.g., due to large confluent lesions or secondary infections), or where image quality prevented reasonable manual assessment (e.g., due to motion artifacts), were excluded. The photograph set for analysis consisted of 66 photographs (median 3.5, interquartile range 2 to 4 photographs per patient) from 18 patients (Figure S2). All patients were estimated as Fitzpatrick skin type VI by a board-certified dermatologist (ERT). Two types of manual annotations were collected for each photograph. First, rater 1 provided segmentation masks for AI training, in which every pixel was manually labeled as lesion or non-lesion. Second, manual lesion counts were collected for each photograph by three human raters (raters 1–3) separately. Manual lesion counts were collected prospectively on unannotated photographs (details in Supplement), with raters blinded to AI outputs. We considered the lesion counts by rater 1 as ground truth, given this rater's greater familiarity and annotation experience with the dataset.
This reference standard was selected because clinical adjudication in prospective clinical trials will likely be based on manual counts from photographs of enrolled patients. To identify and count lesions, we adopted a segmentation approach whereby every pixel in each photograph is classified as belonging to a monkeypox lesion or not. Our AI is based on the ubiquitous U-Net deep learning architecture (Ronneberger et al. 2015) with an Inception-v4 encoder (Szegedy et al. 2017). Prediction models were developed for each of the 18 patients in a leave-one-out experiment. For each model, lesion prediction maps were created for all photographs of the held-out patient not seen during training. Lesion counts were estimated as the number of non-touching lesional areas in the prediction maps (details in Supplement). The primary clinical metric of interest was lesion count performance, evaluated prospectively by comparing the predicted number of lesions for a given photograph to the ground truth number from rater 1. Simple linear regression and Bland–Altman limits of agreement (LoA) analysis were used to compare counts for each photograph. The width of the 95% LoA in this analysis is approximately four times the standard deviation of the difference between predicted and ground truth counts. Figure 1a shows a representative image from Kole with manually identified lesions and AI output. Segmentation performance by the traditional computer vision metric of Dice index is shown in Figure S3. Performance in counting lesions by correlation and LoA analysis is shown in Figure 2. Relative to the ground truth counts (by rater 1), the AI had a mean bias of -5.86 lesions (LoA width 68.85). For the remaining two human raters, the bias from ground truth was -3.24 (LoA width 38.44) for rater 2 and 9.68 (76.74) for rater 3, with a bias of 12.92 (81.91) between raters 3 and 2 (Figure 2 and Table S1).
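The two analysis steps, counting non-touching lesional areas in a binary prediction map and computing Bland–Altman bias and 95% LoA, can be sketched as follows. This is an illustrative example assuming NumPy and SciPy, not the study's actual code; function names are ours:

```python
import numpy as np
from scipy import ndimage


def count_lesions(prediction_map: np.ndarray) -> int:
    """Count non-touching lesional areas in a binary (0/1) prediction map."""
    # ndimage.label groups touching foreground pixels into connected components
    _, num_regions = ndimage.label(prediction_map)
    return num_regions


def bland_altman_loa(pred_counts, truth_counts):
    """Return (bias, lower LoA, upper LoA) for paired lesion counts.

    LoA width = upper - lower = 2 * 1.96 * SD of the differences,
    i.e. roughly four times the standard deviation, as noted in the text.
    """
    d = np.asarray(pred_counts, float) - np.asarray(truth_counts, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```

For example, a prediction map containing two separate blobs of foreground pixels yields a count of 2, regardless of blob size.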
To demonstrate potential generalizability, we also applied the AI to publicly available images of monkeypox (Figure 1b).
Figure 1

Representative monkeypox AI lesion predictions on photos of previously unseen patients. (a) Two patients from our photograph set. Green contours show true positive lesions, blue shows false positive lesions (outlined by the AI but not the human), and magenta shows false negative lesions (outlined by the human but not the AI). Unmarked photos are on the left. Upper photo AI lesion count: 220; manual counts from three human raters: 239 (rater 1), 233 (rater 2), and 259 (rater 3). Lower photo AI lesion count: 131; manual counts: 137 (rater 1), 134 (rater 2), and 143 (rater 3). (b) Two patients from publicly available photographs. Predicted lesion contours from our AI model are shown in yellow. The AI model is the same one used to test Patient ID 15 (N=17, n=61). Upper photo from the CDC Public Health Image Library (Mahy 1997); AI lesion count: 58, manual count by rater 1: 52. Lower photo from the Nigeria Centre for Disease Control, recently made available on the WHO website (NCDC 2022), used with permission; AI lesion count: 26, manual count by rater 1: 29. Written informed consent was obtained for research and publication of photos from all patients.

Figure 2

Comparison of lesion count performance by AI and human raters. Limits of agreement (LoA, shown with dashed lines) are the boundaries within which 95% of future measurement differences are expected to fall. LoA width = upper LoA – lower LoA. We also show the slope and coefficient of determination (R2) for the linear regression fit (red dashed line) between estimated counts for each pair. The solid black line is the line of agreement. (a) Bland-Altman and correlation plots for the AI against the ground truth (human rater 1). (b) Rater 2 against ground truth. (c) Rater 3 against ground truth.

Table S1

Summary of pairwise comparisons between different human raters and AI algorithm.

Rater Pair | Bias  | Upper LoA | Lower LoA | LoA Width | Slope | R²
AI vs 1    | -5.86 | 28.56     | -40.29    | 68.85     | 0.78  | 0.94
2 vs 1     | -3.24 | 15.98     | -22.46    | 38.44     | 1.02  | 0.97
3 vs 1     |  9.68 | 48.05     | -28.69    | 76.74     | 1.07  | 0.92
3 vs 2     | 12.92 | 53.88     | -28.03    | 81.91     | 1.03  | 0.90
Despite the small training dataset, our AI performed at a comparable level to human raters counting monkeypox lesions.
As monkeypox skin lesion counts are an important measure to stage and monitor disease severity, this approach could serve as a practical support tool in monkeypox trials that are imminently launching. A limitation of our study is the inclusion of only a single skin type (Fitzpatrick type VI), which may hamper direct application to other skin types. Our set also lacked images of anogenital or perineal skin, an important emerging disease site in the European and North American outbreaks (Patel et al. 2022; Thornhill et al. 2022). Practical protocols to capture standardized, high-quality photographs of large body regions in resource-limited settings will be a critical next step for AI image analysis to support monkeypox research. Classifying lesion types may also enable more advanced differential diagnosis and monitoring, as well as objective confirmation of endpoints in monkeypox trials. Our cross-validation study of 18 monkeypox patients provides proof of principle that AI algorithms can provide reliable lesion identification and counting from photographs of patients with monkeypox. Ultimately, this could become a globally scalable solution to diagnose, stage, and monitor disease.

DATA AVAILABILITY

No public dataset is available due to the limited size of the study. Data and analyses are available upon reasonable request to the corresponding author.

CONFLICT OF INTEREST

The authors report no conflicts of interest.

Uncited references

- Ghebreyesus, 2022.
- Mahy, 1997.
- NCDC. Nigeria Centre for Disease Control, 2022.

1. Monkeypox Virus Infection in Humans across 16 Countries - April-June 2022.
Authors: John P Thornhill; Sapha Barkati; Sharon Walmsley; Juergen Rockstroh; Andrea Antinori; Luke B Harrison; Romain Palich; Achyuta Nori; Iain Reeves; Maximillian S Habibi; Vanessa Apea; Christoph Boesecke; Linos Vandekerckhove; Michal Yakubovsky; Elena Sendagorta; Jose L Blanco; Eric Florence; Davide Moschese; Fernando M Maltez; Abraham Goorhuis; Valerie Pourcher; Pascal Migaud; Sebastian Noe; Claire Pintado; Fabrizio Maggi; Ann-Brit E Hansen; Christian Hoffmann; Jezer I Lezama; Cristina Mussini; AnnaMaria Cattelan; Keletso Makofane; Darrell Tan; Silvia Nozza; Johannes Nemeth; Marina B Klein; Chloe M Orkin
Journal: N Engl J Med       Date: 2022-07-21

2. Maternal and Fetal Outcomes Among Pregnant Women With Human Monkeypox Infection in the Democratic Republic of Congo.
Authors: Placide K Mbala; John W Huggins; Therese Riu-Rovira; Steve M Ahuka; Prime Mulembakani; Anne W Rimoin; James W Martin; Jean-Jacques T Muyembe
Journal: J Infect Dis       Date: 2017-10-17

3. Emergence of Monkeypox as the Most Important Orthopoxvirus Infection in Humans. [Review]
Authors: Nikola Sklenovská; Marc Van Ranst
Journal: Front Public Health       Date: 2018-09-04

4. Clinical features and novel presentations of human monkeypox in a central London centre during the 2022 outbreak: descriptive case series.
Authors: Aatish Patel; Julia Bilinska; Jerry C H Tam; Dayana Da Silva Fontoura; Claire Y Mason; Anna Daunt; Luke B Snell; Jamie Murphy; Jack Potter; Cecilia Tuudah; Rohan Sundramoorthi; Movin Abeywickrema; Caitlin Pley; Vasanth Naidu; Gaia Nebbia; Emma Aarons; Alina Botgros; Sam T Douthwaite; Claire van Nispen Tot Pannerden; Helen Winslow; Aisling Brown; Daniella Chilton; Achyuta Nori
Journal: BMJ       Date: 2022-07-28
