Mohamed Hussein1, Juana González-Bueno Puyal2, Peter Mountney3, Laurence B Lovat4, Rehan Haidry5. 1. Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Division of Surgery and Interventional Sciences, University College London, London W1W 7TY, United Kingdom. mohamed.hussein3@nhs.net. 2. Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom and Odin Vision, London W1W 7TS, United Kingdom. 3. Odin Vision, London W1W 7TS, United Kingdom. 4. Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Division of Surgery and Interventional Sciences, University College London, London W1W 7TY, United Kingdom. 5. Department of GI Services, University College London Hospital, London NW1 2BU, United Kingdom.
Core Tip: Computer-aided diagnosis of oesophageal pathology may become a key adjunct for the endoscopist, improving the detection of early neoplasia in Barrett’s oesophagus and early squamous neoplasia so that curative endoscopic therapy can be offered. Despite advances in endoscopic imaging modalities, oesophageal cancers are still missed at significant rates, and an artificial intelligence (AI) tool could offset the human factors behind some of these misses. To fulfil the potential of this exciting area of AI, certain criteria need to be met, which we expand upon in this review. Once implemented, this technology will have a significant impact on this field of endoscopy.
INTRODUCTION
The past decade has seen significant advances in endoscopic imaging and optical enhancements to aid early diagnosis. Oesophageal cancer (adenocarcinoma and squamous cell carcinoma) is associated with significant mortality[1]. As of 2018, oesophageal cancer ranked seventh worldwide in both cancer incidence and mortality, with 572000 new cases[2]. Oesophageal squamous cell carcinoma accounts for more than 90% of oesophageal cancers in China, with an overall 5-year survival rate of less than 20%[3].

Despite these technological advances there is still a treatment gap due to the underdiagnosis of lesions of the oesophagus[4]. A meta-analysis of 24 studies showed that missed oesophageal cancers are found within a year of the index endoscopy in a quarter of patients undergoing surveillance for Barrett’s oesophagus (BE)[5]. A large multicentre retrospective study of 123395 upper gastrointestinal (GI) endoscopies showed an overall missed oesophageal cancer rate of 6.4%; the interval between a negative endoscopy and the diagnosis was less than 2 years in most cases[6]. Multivariate analysis showed that one of the factors associated with the miss rate is a less experienced endoscopist.

Efforts are necessary to improve the detection of early neoplasia secondary to BE and early squamous cell neoplasia (ESCN) such that curative, minimally invasive endoscopic therapy can be offered to patients. Computer-aided diagnosis may play an important role in the coming years as an adjunct to endoscopists in the early detection and diagnosis of early oesophageal cancers. In this article we review current advances in artificial intelligence in the oesophagus and future directions for development.
DEFINITIONS
Machine learning is the use of mathematical models to capture structure in data[7]. The algorithms improve automatically through experience and do not need to be explicitly programmed[8]. The final trained models can be used to make predictions about oesophageal diagnoses. Machine learning is classified into supervised and unsupervised learning. During supervised learning, the model is trained with data containing pairs of inputs and outputs; it learns how to map the inputs to the outputs and applies this mapping to unseen data. In unsupervised learning, the algorithm is given data inputs which are not directly linked to outputs and therefore has to formulate its own structure and set of patterns from the inputs[9].

Deep learning is a subtype of machine learning in which the model, a neural network, is composed of several layers of neurons, loosely analogous to the human brain. This enables automatic learning of features, which is particularly useful in endoscopy, where images and videos lack structure and are not easily processed into specific features[9]. A convolutional neural network (CNN) is a subtype of deep learning model which can take an input endoscopic image, learn specific features (e.g., colour, size, pit pattern), process this complex information through many different layers, and produce an output prediction (e.g., oesophageal dysplasia or no dysplasia) (Figure 1).
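The convolution step at the heart of a CNN can be illustrated with a toy sketch. The kernel and decision threshold below are hand-picked for illustration only; in a trained CNN both are learned from labelled data, and many such filters are stacked across layers with non-linearities between them:

```python
# Toy illustration of CNN-style feature extraction on a grayscale "image"
# (a 2D list of pixel intensities). Kernel and threshold are illustrative,
# not taken from any of the models discussed in this review.

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) on a 2D list."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def predict_dysplasia(image, threshold=1.0):
    """Toy 'classifier': a strong mean edge/texture response is mapped to
    'dysplasia'. A trained CNN would learn both the filters and the
    decision boundary instead of using this hand-written rule."""
    edge_kernel = [[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]  # simple hand-written texture detector
    fmap = conv2d(image, edge_kernel)
    mean_act = sum(abs(v) for row in fmap for v in row) / (len(fmap) * len(fmap[0]))
    return "dysplasia" if mean_act > threshold else "no dysplasia"
```

A uniform patch produces no edge response and is labelled negative, while a high-contrast pit-pattern-like patch triggers the positive label; real models make this decision from thousands of learned features rather than one filter.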
Figure 1
A deep learning model. Features of an endoscopic image processed through multiple neural layers to produce a predicted diagnosis of oesophageal cancer or no oesophageal cancer present on the image.
To develop a machine learning model, the data needs to be split into three independent groups: a training set, a validation set and a testing set. The training set is used to build a model using the oesophageal labels (e.g., dysplasia or no dysplasia). The validation set provides an unbiased evaluation of the model’s skill whilst tuning the hyper-parameters of the model, for example the number of layers in the neural network; it is used to ensure that the model is not overfitting to the training data. Overfitting means that the model performs well on the training data but not on unseen testing data. The test set is used to evaluate the performance of the final predictive model[7] (Figure 2).
Figure 2
Three independent data sets are required to create a machine learning model that can predict an oesophageal cancer diagnosis.
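The three-way split described above can be sketched as follows. The 70/15/15 fractions are illustrative conventions, not values taken from any of the studies discussed, and the patient-level caveat in the comment is a general good practice rather than a requirement stated here:

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle once with a fixed seed, then cut into three independent
    partitions: train (fit the model), validation (tune hyper-parameters,
    watch for overfitting) and test (final unbiased evaluation).
    In endoscopy work the split should be done per *patient*, not per
    image, so frames from one patient never leak between partitions."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```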
ADVANCES IN ENDOSCOPIC IMAGING
Endoscopic imaging has advanced into a new era with the development of high definition digital technology. A charge-coupled device chip in standard white light endoscopy produces an image signal of 10000 to 400000 pixels displayed in a standard definition format, whereas the chips in a high definition white light endoscope produce image signals of 850000 to 1.3 million pixels displayed in high definition[10]. This has improved our ability to pick up the most subtle oesophageal mucosal abnormalities by assessing mucosal pit patterns and vascularity, allowing a timely diagnosis of dysplasia or early cancer.

There have been further advances in optical technology with chromoendoscopy techniques such as narrow-band imaging (NBI), i-scan (Pentax, Hoya) and blue laser imaging (Fujinon), which have further improved early neoplasia detection and diagnosis in the oesophagus. Table 1 summarises some of the studies investigating the accuracy of these imaging modalities in detecting BE dysplasia using classification systems based on mucosal pit pattern, colour and vascular architecture.
Table 1 Studies showing accuracy in the detection of Barrett’s oesophagus dysplasia for each endoscopic modality

|                   | I-scan optical enhancement       | NBI | BLI                                    |
|-------------------|----------------------------------|-----|----------------------------------------|
| Ref.              | Everson et al[11]                | Sharma et al[12] | Subramaniam et al[13]     |
| Features assessed | Mucosal pit pattern, vessels     | Mucosal pit pattern, vessels | Colour, mucosal pit patterns, vessels |
| Accuracy          | Experts = 84%, non-experts = 76% | 85% | Experts = 95.2%, non-experts = 88.3%   |
| Sensitivity       | Experts = 77%, non-experts = 81% | 80% | Experts = 96%, non-experts = 95.7%     |
| Specificity       | Experts = 92%, non-experts = 70% | 88% | Experts = 94.4%, non-experts = 80.8%   |

NBI: Narrow-band imaging; BLI: Blue laser imaging.
In squamous epithelium, the microvascular patterns of the intrapapillary capillary loops (IPCL) are used to aid the diagnosis of early squamous cell cancer (Figure 3)[14]. The classification systems currently in use are based on assessment of IPCL patterns with magnification endoscopy[15].
BARRETT’S OESOPHAGUS
BE is the only identifiable premalignant condition associated with invasive oesophageal adenocarcinoma. There is a linear progression from non-dysplastic BE to low-grade and then high-grade dysplasia. Early neoplasia confined to the mucosa has eradication rates of > 80% with endoscopic therapy[16].

The standard of care for endoscopic surveillance of patients with BE is random biopsies taken as part of the Seattle protocol, in which four-quadrant biopsies are taken every 2 cm of the BE segment[17]. This method is not perfect and is associated with sampling error. The surface area of a 2 cm segment of BE is approximately 14 cm2, while a single biopsy samples approximately 0.125 cm2. Seattle protocol biopsies will therefore only cover 0.5 cm2 of the oesophageal mucosa, which is 3.5% of the BE segment[18]. Dysplasia can often be focal and is therefore easily missed. Studies have also shown that compliance with this protocol is poor, and worse in longer segments of BE[19].

The American Society for Gastrointestinal Endoscopy preservation and incorporation of valuable endoscopic innovations (PIVI) initiative was developed to direct endoscopic technology development. Any imaging technology with targeted biopsies in BE would need to achieve a threshold per-patient sensitivity of at least 90% for the detection of high-grade dysplasia and intramucosal cancer, together with a specificity of at least 80%, in order to eliminate the requirement for random mucosal biopsies during BE endoscopic surveillance. This would improve the cost-effectiveness of a surveillance programme. These are the minimum targets an AI technology would need to meet to be ready for prime time as a possible adjunct during BE surveillance endoscopy[20].

An early study tested a computer algorithm developed on 100 images from 44 patients with BE. It was trained using colour and texture filters. The algorithm diagnosed neoplastic lesions on a per-image level with a sensitivity and specificity of 0.83.
At the patient level, a sensitivity of 0.86 and a specificity of 0.87 were achieved. This was the first study in which a detection algorithm was developed for detecting BE lesions and compared with expert annotations[21].

A recent study developed a hybrid ResNet-UNet computer-aided diagnosis system which classified images as containing neoplastic or non-dysplastic BE with a sensitivity and specificity of 90% and 88% respectively, achieving a higher accuracy than non-expert endoscopists[22].

De Groof et al[23] performed one of the first studies to assess the accuracy of a computer-aided detection (CAD) system during live endoscopic procedures, in 10 patients with BE dysplasia and 10 patients without. Three images were evaluated every 2 cm of BE by the CAD system. The sensitivity and specificity of the CAD system on per-level analysis were 91% and 89% respectively (Figure 4).
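The per-image sensitivity and specificity figures reported by these studies, and the PIVI thresholds discussed earlier, can be computed from predictions and ground-truth labels with a minimal sketch; the label encoding (1 = neoplastic, 0 = non-dysplastic) is an illustrative convention:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP).
    Labels: 1 = neoplastic, 0 = non-dysplastic (illustrative encoding)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def meets_pivi(sens, spec):
    """PIVI thresholds for targeted biopsy in BE surveillance:
    per-patient sensitivity >= 90% and specificity >= 80%."""
    return sens >= 0.90 and spec >= 0.80
```

Note that the PIVI thresholds apply per patient, so predictions would first be aggregated from frame level to patient level before this check.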
Table 2 Summary of all the studies investigating the development of machine learning algorithms for the detection of dysplasia in Barrett’s oesophagus

BE: Barrett’s oesophagus; WLE: White light endoscopy; NBI: Narrow band imaging; VLE: Volumetric laser endomicroscopy; CNN: Convolutional neural network; CAD: Computer-aided detection.
ESCN
With advances in endoscopic therapy in recent years, ESCN confined to the mucosal layer can be curatively resected endoscopically, with a < 2% incidence of local lymph node metastasis. IPCLs are the microvascular features which can be used endoscopically to help classify and identify ESCN and to judge whether there is invasion into the muscularis mucosae or submucosal tissue[16].

Lugol’s chromoendoscopy is a screening method for identifying ESCN during an upper GI endoscopy. However, despite a sensitivity of > 90%, it is associated with a low specificity of approximately 70%[32], and there is a risk of allergic reaction with iodine staining. Advanced endoscopic imaging with NBI has a high accuracy for detecting ESCN; however, a randomised controlled trial showed its specificity was approximately 50%[33]. Computer-assisted detection systems have been developed to try to overcome many of these issues and aid endoscopists in detecting early ESCN lesions.

Everson et al[16] developed a CNN trained with 7046 sequential high definition magnification endoscopy images with NBI. These were classified by experts using IPCL patterns, based on the Japanese Endoscopic Society classification. The CNN was able to accurately classify abnormal IPCL patterns with a sensitivity and specificity of 89% and 98% respectively. The diagnostic prediction times were between 26 and 37 ms (Figure 6).
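A prediction time of 26-37 ms matters because real-time video endoscopy at 25 frames per second leaves a budget of roughly 40 ms per frame. A simple way to measure such per-frame latency, with `model_fn` standing in for any hypothetical classifier, is:

```python
import time

def time_inference(model_fn, frames):
    """Measure per-frame prediction latency in milliseconds.
    model_fn is any callable mapping a frame to a label; at 25 fps the
    budget for real-time use is ~40 ms per frame, so the max latency
    is the figure that matters."""
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        model_fn(frame)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return min(latencies), max(latencies)
```

In practice the first frames are usually discarded as warm-up, since model initialisation can inflate early measurements.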
Table 3 Summary of all the studies investigating the development of machine learning algorithms for the detection of early squamous cell neoplasia

WLE: White light endoscopy; NBI: Narrow band imaging; AI: Artificial intelligence; CNN: Convolutional neural network; IPCL: Intrapapillary capillary loops; BLI: Blue laser imaging; ESCN: Early squamous cell neoplasia; ECS: Endocytoscopic system; ME: Magnification endoscopy.
AI AND HISTOLOGY ANALYSIS IN OESOPHAGEAL CANCER
In digital pathology, tissue slides are scanned as high-resolution images, as each slide contains a large volume of cells and the cellular structure needs to be visible to the histopathologist in order to identify areas of abnormality[43]. Histopathological analysis often requires considerable time, high costs and manual annotation of areas of interest by histopathologists. There is also a possible miss rate for areas of early oesophageal dysplasia, as these areas can be focal, and interobserver agreement among expert GI histopathologists is suboptimal for certain histological diagnoses, such as low-grade dysplasia in BE[44].

A novel AI system to detect and delineate areas of early oesophageal cancer on histology slides could therefore be a key adjunct to histopathologists. Tomita et al[43] developed a convolutional attention-based mechanism to classify microscopic images into normal oesophageal tissue, BE with no dysplasia, BE with dysplasia and oesophageal adenocarcinoma using 123 histological images. The classification accuracy of the model was 0.85 in the BE-no dysplasia group, 0.89 in the BE with dysplasia group, and 0.88 in the oesophageal adenocarcinoma group.
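Per-class accuracies such as those reported in this study can be computed with a straightforward tally. The sketch below is a generic metric computation, not the study's method, and the class labels in the usage are illustrative strings:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """For each ground-truth class, return the fraction of samples of
    that class which the model labelled correctly."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {cls: correct[cls] / total[cls] for cls in total}
```

Reporting accuracy per class, rather than one overall figure, matters here because the four histological categories are unlikely to be equally represented among 123 slides.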
ROLE OF AI IN QUALITY CONTROL IN THE OESOPHAGUS
The inspection time of the oesophagus and clear mucosal views have an impact on the quality of an oesophagoscopy and the yield of early oesophageal neoplasia detection. Assessment should take place with the oesophagus partially insufflated, between peristaltic waves; an overly insufflated oesophagus can flatten a lesion, which can in turn be missed[45]. The British Society of Gastroenterology consensus guidelines on the quality of upper GI endoscopy recommend adequate mucosal visualisation achieved by a combination of aspiration, adequate air insufflation and mucosal cleansing techniques. They also recommend that the quality of mucosal visualisation and the inspection time during a Barrett’s surveillance endoscopy should be reported[46].

Chen et al[47] investigated their AI system, ENDOANGEL, which prompts the endoscopist about blind spots during upper GI endoscopy, reports the inspection time and grades the percentage of the mucosa that has been visualised.
CONCLUSION
Computer-aided diagnosis of oesophageal pathology may become a key adjunct for the endoscopist, improving the detection of early neoplasia in BE and ESCN such that curative endoscopic therapy can be offered. There are significant miss rates of oesophageal cancers despite advances in endoscopic imaging modalities, and an AI tool could offset the human factors associated with some of these miss rates.

At the same time, it is key that AI systems avoid overfitting, where a model performs well on training data but underperforms when exposed to new data. A system needs to be able to detect early oesophageal cancer in both low- and high-quality frames during real-time endoscopy. This requires high volumes of both low- and high-quality training data, evaluated on low- and high-quality testing data, to reflect the real-world setting of an endoscopy.

Further research is required on the use of AI in quality control in the oesophagus, to help endoscopists meet the quality indicators for surveillance endoscopy set out in many of the international guidelines. This will ensure a minimum standard of endoscopy is met.

Research in this area of AI is expanding and the future looks promising. To fulfil this potential the following is required: (1) Further development to improve the performance of AI technology in the oesophagus in detecting early cancer/dysplasia in BE or ESCN during real-time endoscopy; (2) High-quality clinical evidence from randomised controlled trials; and (3) Guidelines from clinical bodies or national institutes. Once implemented, this technology will have a significant impact on this field of endoscopy.