Literature DB record: PMID 28790867
Caroline Gallay, Anne Girardet, Manuela Viviano, Rosa Catarino, Anne-Caroline Benski, Phuong Lien Tran, Christophe Ecabert, Jean-Philippe Thiran, Pierre Vassilakos, Patrick Petignat.
Abstract
BACKGROUND: Visual inspection after application of acetic acid (VIA) and Lugol's iodine (VILI) is a cervical cancer (CC) screening approach that has recently been adopted in low- and middle-income countries (LMIC). Innovative technologies allow the acquisition of consecutive cervical images of VIA and VILI using a smartphone application. The aim of this study was to evaluate the quality of smartphone images in order to assess the feasibility and usability of a mobile application for CC screening in LMIC.
Keywords: HPV; cervical cancer screening; human papillomavirus; mobile health; smartphone; visual approach
Year: 2017 PMID: 28790867 PMCID: PMC5489054 DOI: 10.2147/IJWH.S136351
Source DB: PubMed Journal: Int J Womens Health ISSN: 1179-1411
Figure 1. Study flowchart.
Abbreviations: HPV, human papillomavirus; HR, high risk; VIA, visual inspection after application of acetic acid; VILI, visual inspection after application of Lugol’s iodine.
Figure 2. Device including the gynecological table, smartphone, and tripod.
Figure 3. Image acquisition flowchart.
Abbreviations: VIA, visual inspection after application of acetic acid; VILI, visual inspection after application of Lugol’s iodine.
Pictures fulfilling each of the four quality criteria, n (%), out of a total of 624 evaluations (208 photos per rater)
| | Sharpness | Focus | Zoom | Diagnostic utility |
|---|---|---|---|---|
| Rater 1 | 143 (68.8) | 184 (88.5) | 149 (71.6) | 150 (72.1) |
| Rater 2 | 151 (72.6) | 166 (79.8) | 116 (55.8) | 166 (79.8) |
| Rater 3 | 191 (91.8) | 206 (99.0) | 195 (93.8) | 206 (99.0) |
| Total | 485 (77.7) | 556 (89.1) | 460 (73.7) | 522 (83.7) |
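The percentages are simple proportions: each rater assessed 208 photos (208 × 3 = 624 evaluations), so, for example, 143/208 ≈ 68.8% for Rater 1's sharpness ratings, while the Total row divides by all 624 evaluations (485/624 ≈ 77.7%). A minimal Python sketch (data layout assumed) that reproduces the table:

```python
# Minimal sketch reproducing the quality-criteria table above.
# Each rater evaluated 208 photos; three raters give 624 evaluations.
counts = {
    "Rater 1": {"Sharpness": 143, "Focus": 184, "Zoom": 149, "Diagnostic utility": 150},
    "Rater 2": {"Sharpness": 151, "Focus": 166, "Zoom": 116, "Diagnostic utility": 166},
    "Rater 3": {"Sharpness": 191, "Focus": 206, "Zoom": 195, "Diagnostic utility": 206},
}
PER_RATER, TOTAL = 208, 624

for rater, row in counts.items():
    cells = [f"{n} ({100 * n / PER_RATER:.1f})" for n in row.values()]
    print(rater, *cells)  # e.g. Rater 1: 143 (68.8) 184 (88.5) ...

# The Total row sums across raters and divides by all 624 evaluations.
for criterion in counts["Rater 1"]:
    n = sum(row[criterion] for row in counts.values())
    print("Total", criterion, f"{n} ({100 * n / TOTAL:.1f})")
```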
Evaluation of photo quality, n (%), with good quality defined as two or more of the following criteria fulfilled: focus, diagnostic utility, sharpness, and zoom
| | Good quality photos | Poor quality photos |
|---|---|---|
| Rater 1 | 177 (85.1) | 31 (14.9) |
| Rater 2 | 176 (84.6) | 32 (15.4) |
| Rater 3 | 206 (99.0) | 2 (1.0) |
| Consensus | 194 (93.3) | 14 (6.7) |
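The quality rule above is simple enough to state as code. The sketch below is purely illustrative; the function name and data layout are assumptions, not part of the study's smartphone application. A photo counts as good quality when at least two of the four criteria are fulfilled:

```python
# Hypothetical illustration of the stated rule: a photo is "good quality"
# when two or more of the four criteria are fulfilled. Names are ours.
CRITERIA = ("sharpness", "focus", "zoom", "diagnostic_utility")

def is_good_quality(ratings: dict) -> bool:
    """ratings maps each criterion to True (fulfilled) or False."""
    return sum(ratings[c] for c in CRITERIA) >= 2

example = {"sharpness": True, "focus": True, "zoom": False, "diagnostic_utility": False}
print(is_good_quality(example))  # True: two criteria fulfilled
```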
Agreement and kappa values between each rater and the consensus
| | Obtained agreement (%) | Expected agreement (%) | κ | Standard error | P-value |
|---|---|---|---|---|---|
| Rater 1 | 91.83 | 80.37 | 0.58 | 0.06 | <0.001 |
| Rater 2 | 90.38 | 79.96 | 0.52 | 0.06 | <0.001 |
| Rater 3 | 94.23 | 92.44 | 0.23 | 0.05 | <0.001 |
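The κ values follow Cohen's formula, κ = (p_o - p_e) / (1 - p_e), where p_o is the obtained agreement and p_e the agreement expected by chance; for Rater 1, (0.9183 - 0.8037) / (1 - 0.8037) ≈ 0.58. A minimal sketch checking the table from its rounded percentages:

```python
# Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), with p_o the obtained
# agreement and p_e the chance-expected agreement, both as proportions.
rows = {"Rater 1": (91.83, 80.37), "Rater 2": (90.38, 79.96), "Rater 3": (94.23, 92.44)}

for rater, (obtained, expected) in rows.items():
    p_o, p_e = obtained / 100, expected / 100
    kappa = (p_o - p_e) / (1 - p_e)
    print(f"{rater}: kappa = {kappa:.2f}")
# Rater 1 -> 0.58 and Rater 2 -> 0.52 match the table; Rater 3 gives 0.24
# from these rounded percentages (the table's 0.23 presumably reflects
# unrounded agreement values).
```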